Everyone is trying to compare AI companies with something that happened in the past, but I don't think we can predict much from that.
GPUs are not railroads or fiber optics.
The cost structure of ChatGPT and other LLM-based services is entirely different from the web: they are very expensive to build, but they also cost a lot to serve.
Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.
On the other hand, OpenAI, Anthropic, and others could soon find themselves in a difficult position, at the mercy of Nvidia.
Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027. It won’t retain value the way the infrastructure of previous bubbles did.
> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, many of the gains come from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as features get much smaller.
Betting against compute getting better/cheaper/faster is probably a bad idea, but I think fundamental improvements will come a lot slower over the next decade as shrinking gets much harder.
>> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
> I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, many of the gains come from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as features get much smaller.
I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't all going to fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long-term value of the (massive) investment.
Unfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.
Yeah - I think the extremely fast depreciation of GPUs, just from wear and use, is pretty underappreciated right now. So you've spent $300 million on a brand new data center - congrats - you'll need to pay off that loan and somehow raise another $100 million to actually maintain that capacity for three years, based on chip replacement alone.
There is an absolute glut of cheap compute available right now due to VC and other funds dumping into the industry (take advantage of it while it exists!), but I'm pretty sure Wall St. will balk when they realize the continued costs of maintaining that compute and look at the revenue that expenditure is generating. People think of chips as infrastructure - you buy a personal computer and it'll keep chugging for a decade without issue in most cases - but GPUs are essentially consumables: an input to producing the compute a data center sells, one that needs constant restocking rather than a one-time investment.
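The back-of-envelope math above can be sketched out; all figures are the commenter's hypotheticals, not real data-center numbers:

```python
# Hypothetical data-center economics from the comment above:
# $300M up-front build, plus ~$100M of GPU replacement over three years
# just to maintain capacity (GPUs as consumables, not one-time infra).
initial_build = 300e6    # one-time construction + initial GPUs
gpu_replacement = 100e6  # restocking cost over the period
years = 3

total_cost = initial_build + gpu_replacement
annualized = total_cost / years
print(f"Total 3-year cost: ${total_cost / 1e6:.0f}M")    # $400M
print(f"Annualized:        ${annualized / 1e6:.0f}M/yr")  # ~$133M/yr
```

The point being that the replacement line item alone adds a third again on top of the headline build cost over the first three years.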
Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.
That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down but were needed to build the grander vision won't even be a second thought in hindsight.
> That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
How long did it take for 9 out of 10 of those rail lines to become nonviable? If they lasted (say) 50 years instead of 100, because that much rail capacity was (say) obsoleted by the advent of cars and trucks, that's still pretty good.
> How long did it take for 9 out of 10 of those rail lines to become nonviable?
Records from the time are few and far between, but, from what I can tell, it looks like they likely weren't ever actually viable.
The records do show that the railways were profitable for a short while, but it seems only because the government paid for the infrastructure. If they had to incur the capital expenditure themselves, the math doesn't look like it would math.
Imagine where the LLM businesses would be if the government paid for all the R&D and training costs!
If 1/10 of the investment lasts 100 years, that seems pretty good to me. Plus, I'd bet a lot of the other 9/10 had much of the material cost recouped when the steel was scrapped. I don't think you're going to recoup a lot of money from H100s.
Much like LLMs. There are approximately 10 reasonable players giving it a go, and, unless this whole AI thing goes away, never to be seen again, it is likely that one of them will still be around in 100 years.
H100s are effectively consumables used in the construction of the metaphorical rail. The actual rail lines had their own fair share of necessary tools that retained little to no residual value after use as well. This isn't anything unique.
Thinking of H100s as consumables is keen - it's much better to analogize H100s to coal, and the chip manufacturer to the mine owner, than to think of them as rails. They are impermanent and need constant upkeep and replacement - they are not one-time costs that you build as infra and forget about.
> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.
If what you say is true, you could find an A100 for cheap/free right now. But check out the prices.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.
Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.
The track bed, rails, and ties will have been replaced many times by now. But the really expensive work was clearing the right of way and the associated bridges, tunnels, etc.
Exactly: when was the last time you used GPT-3.5? Its value depreciated to zero after, what, two and a half years? (And the Nvidia chips used to train it have barely retained any value either.)
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
I would think it's more like a general codebase: even if after 2.5 years 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.
I rejoined a previous employer of mine, one everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.
I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?
I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.
We don't need better models for the vast majority of chats taking place today. E.g., kids using it for help with homework: are today's models really not good enough?
They aren't. They are obsequious. This is much worse than it seems at first glance, and you can tell it is a big deal because a lot of effort going into training the new models is to mitigate it.
Not necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., the first ones to get there becomes the sole purveyors of the Good AI.
In practice that hasn't borne out. You can download and run open weight models now that are within spitting distance of state-of-the-art, and open weight models are at best a few months behind the proprietary stuff.
And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.
More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem to be a scenario where any player can afford to stop setting cash on fire and start making money.
Businesses are different but the fundamentals of business and finance stay consistent. In every bubble that reality is unavoidable, no matter how much people say/wish “but this time is different.”
The funniest thing about all this is that the biggest difference between LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes, synthetic data, and countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.
Just because they have ongoing costs after purchasing them doesn't mean it's different from something else we've seen. What are you trying to articulate exactly: that this is a simple business that can get costs under control eventually, or not?
I think the most interesting numbers in this piece (ignoring the stock compensation part) are:
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
Free usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
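A sketch of that gap-filling, using the figures quoted in this thread plus a purely hypothetical operating-loss number standing in for whatever the piece actually reports:

```python
# Back out COGS from an income statement:
#   operating_loss = COGS + R&D + S&M - revenue   (ignoring other opex)
revenue = 4.3e9          # quoted in the thread
r_and_d = 6.7e9          # quoted in the thread
sales_marketing = 2.0e9  # quoted in the thread
operating_loss = 8.0e9   # HYPOTHETICAL placeholder, not a reported number

implied_cogs = revenue + operating_loss - r_and_d - sales_marketing
print(f"Implied COGS: ${implied_cogs / 1e9:.1f}B")  # $3.6B under these assumptions
```

Note that this treats free-tier inference as sales & marketing per the comment above, so the implied COGS here would cover paid usage only.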
Marketing != advertising. Although this budget probably does include some traditional advertising, it is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big and hosts all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front, so it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets at much smaller companies.
I agree - they're winning big and booking big revenue.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).
If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.
Both of these costs are likely to increase rather than decrease over time.
> ChatGPT is already a new default "pane of glass" for normal people.
OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.
> $2 billion on sales and marketing - anyone got any idea what this is?
Not sure where/how I read it, but I remember coming across articles stating that OpenAI has agreements with schools, universities, and even the US government. The cost of making those happen would probably go into "sales & marketing".
Most folks who are not engineers building the product are likely classified under “sales and marketing”: “developer advocates,” “solutions architects,” and all that stuff included.
It's pretty well accepted now that for pre-training LLMs the curve is an S, not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that that's not nearly as expensive as pre-training. I don't think 3-6 months is the time to a 10x improvement anymore (however that's measured); it seems closer to a year and growing, assuming the plateau is real. I'd love to know if there are solid estimates on "doubling times" these days.
With marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with money, like Google, can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency and utility to users with a fixed model (or models). GPT-5, with its auto-routing between different performance models, seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.
Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.
All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.
Free users typically fall into sales and marketing. The idea is that if they cut off the entire free tier, they would have still made the same revenue off of paying customers by spending $X on inference and not counting the inference spend on free users.
You see content about OpenAI everywhere; they spent $2B on marketing. You are in the right places, you're just used to seeing these things labeled as ads.
Remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.
> $2 billion on sales and marketing - anyone got any idea what this is?
I used to follow OpenAI on Instagram, all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people that the product has yet to reach.
I'm pretty sure I saw some ChatGPT ads on Duolingo. Also, never forget that regular folks do not use ad blockers. The tech community often doesn't realize how polluted the Internet and mobile apps are.
Speculating, but do they pay to be integrated as the default AI in various places, the same way Google has paid to be the default search engine on things like the iPhone?
Hard to know where it sits in this breakdown, but I would expect them to have the proper breakdowns internally. We know the inference side is profitable, but not at what scale.
$2.5B in stock comp for about 3,000 employees. That's roughly $830k per person in just six months. Almost 60% of their revenue went straight back to staff.
Stock compensation is not a cash outflow; it just dilutes the other shareholders, so current cash flow should not have anything to do with the amount of stock issued[1].
While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), industry typically uses something like 4-year vesting with a 1-year cliff.
Every accounting firm and company is different; most would account for it over the entire period upfront, and the value could change when it vests and is exercised.
So even if you want to compare it to revenue, it should at a bare minimum be compared with the revenue generated during the entire period, say 4 years, plus the valuation of the IP created during the tenure of the options.
---
[1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.
Even the secondary sale OpenAI is reported to be facilitating for staff, worth $6.6 billion, has no direct bearing on its own financials, i.e. one third party (a new investor) is buying from another third party (an employee); the company is only facilitating the sale for morale, retention, and other HR reasons.
There is a secondary impact: in theory those could be shares the company sells directly to the new investor instead, keeping the cash itself. But it is not spending any existing cash it already has or is generating, just forgoing some of the new funds.
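A minimal sketch of the standard schedule mentioned above (4-year vesting, 1-year cliff); the monthly-after-cliff detail is an assumption, as grant terms vary:

```python
def vested_fraction(months_employed: int) -> float:
    """Fraction of a grant vested: nothing before the 12-month cliff,
    then 25% at the cliff and the remainder monthly through month 48."""
    if months_employed < 12:
        return 0.0
    return min(months_employed, 48) / 48

print(vested_fraction(11))  # 0.0  (pre-cliff)
print(vested_fraction(12))  # 0.25 (cliff hits)
print(vested_fraction(48))  # 1.0  (fully vested)
```

This is why comparing one six-month comp expense to one six-month revenue figure is apples to oranges: the expense recognized now corresponds to service (and hoped-for value) spread over the whole vesting period.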
Both numbers are entirely ludicrous. Highly skilled people are certainly quite valuable, but it's insane that these companies aren't just training up more people internally. The 50x developer is a pervasive myth in our industry, and it's one that needs to be put to rest.
Do other professionals (lawyers, finance etc.) argue for reducing their own compensation with the same fervor that software engineers like to do? The market is great for us, let’s enjoy it while it lasts. The alternative is all those CEOs colluding and pushing the wages down, why is that any better?
The ∞x engineer exists, in my opinion. There are some things that can only be executed by a few people, that nobody else could execute. You could throw 10,000 engineers at a problem and they might not be able to solve it, but a single other person could.
I have known several people who have gone to OAI, and I would firmly say they are 10x engineers, but they are just doing general infra stuff that all large tech companies have to do, so I wouldn't say they are solving problems that only they can solve.
It's apparent in other fields too. Reminds me of when Kanye wanted a song like "Sexy Back", so he made Stronger but it sounded "too muddy". He had a bunch of famous, great producers try to help but in the end caved and hired the producer of "Sexy Back". Kanye said it was fixed in five minutes.
Nobody wants to hear that one dev can be 50x better, but it's obvious that everyone has their own strengths and weaknesses and not every mind is replaceable.
I think you're right to an extent (it's probably fair to say e.g. Einstein and Euler advanced their fields in ways others at the time are unlikely to have done), but I think it's much easier to work out who these people are after the fact whereas if you're dishing out a monster package you're effectively betting that you've found someone who's going to have this massive impact before they've done it. Perhaps a gamble you're willing to take, but a pretty big gamble nonetheless.
> The 50x developer is a pervasive myth in our industry
Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.
You have to out-pay to keep your talent from walking out the door. California does not have non-competes. With the number of AI startups in SF you don't need to relocate or even change your bus route in most cases.
This. The main reason OpenAI throws money at top-level folks is that they can quickly replicate what they have at OpenAI elsewhere. Imagine you have a top-level researcher who's developed some techniques over multiple years that the competition doesn't have. The same engineer can take them to another company and bring parity within months. And that's on top of progress slowing down within your own company. I can't steal IP, but I sure as hell can bring my head everywhere.
These numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
If it's an all out race between the different AI providers, then it's logical for OpenAI to hire employees that are pre-trained rather than training up more internally.
They won't always. You'll always have turnover - but if it's a major problem for your company, it's clearly something you need to work out internally. People generally hate switching jobs, especially in an uncertain political climate, especially when expenses are going up - there is a lot of momentum to just stay where you are.
You may lose a few employees to poaching, sure - but the math on the relative cost of hiring someone for $100m vs. training a bunch of employees and losing a portion of them is pretty strongly in your favor.
They’ve had multiple secondary sales opportunities in the past few years, always at a higher valuation. By this point, if someone who’s been there >2 years hasn’t taken money off the table it’s most likely their decision.
I don’t work there but know several early folks and I’m absolutely thrilled for them.
private secondary markets are pretty liquid for momentum tech companies, there is an entire cottage industry of people making trusts to circumvent any transfer restrictions
employees are very liquid if they want to be, or wait a year for the next 10x in valuation
It's a bit misleading to frame stock comp as "60% of revenue" since their expenses are way larger than their revenue. R&D was $6.7B which would be 156% of revenue by the same math.
A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.
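Running the thread's numbers side by side shows why the choice of denominator matters:

```python
# Figures as quoted in this thread (six-month period).
revenue = 4.3e9         # revenue
stock_comp = 2.5e9      # stock compensation
r_and_d = 6.7e9         # R&D spend
total_expenses = 12.1e9 # approximate total expenses
employees = 3000        # approximate headcount

print(f"Stock comp / revenue:    {stock_comp / revenue:.0%}")         # ~58%
print(f"R&D / revenue:           {r_and_d / revenue:.0%}")            # ~156%
print(f"Stock comp / expenses:   {stock_comp / total_expenses:.0%}")  # ~21%
print(f"Stock comp per employee: ${stock_comp / employees / 1e3:.0f}k")  # ~$833k
```

The "60% of revenue" framing and the "21% of costs" framing both fall straight out of the same inputs; only the denominator changes.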
If Meta is throwing tens of millions at hot AI staffers, then $1.6M average annual stock comp starts looking less insane. A lot of that may also have been promised at a lower valuation, given how wild OpenAI's valuation is.
These numbers are pretty ugly. You always expect new tech to operate at a loss initially but the structure of their losses is not something one easily scales out of. In fact it gets more painful as they scale. Unless something fundamentally changes and fast this is gonna get ugly real quick.
The real answer is in advertising/referral revenue.
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
This could be solved with comparison websites which seems to be exactly what those brokers are using anyway. I had a broker proudly declare that he could get me the best deal, which turned out to be exactly the same as what moneysavingexperts found for me. He wanted £150 for the privilege of searching some DB + god knows how much commission he would get on top of that...
I've said it before and I'll say it again.. if I was able to know the time it takes for bubbles to pop I would've shorted many of the players long ago.
They could keep the current model in ChatGPT the same forever and 99% of users wouldn't know or care, and unless you think hardware isn't going to improve, the cost of that will basically decrease to 0.
For programming it's okay, for maths it's almost okay. For things like stories and actually dealing with reality, the models aren't even close to okay.
I didn't understand how bad it was until this weekend, when I sat down and tried GPT-5, first without the thinking mode and then with it. It misunderstands sentences, generates crazy things, loses track of everything - completely beyond how bad I thought it could possibly be.
I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.
This extends to analyzing discussions. It simply misunderstands what people say. If you try this kind of thing, you will realise the degree to which these things are just sequence models, with no ability to think, really short attention spans, and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.
When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.
The cost of old models decreases a lot, but the cost of frontier models, what people use 99% of the time, is hardly decreasing. Plus, many of the best models rely on thinking or reasoning, which use 10-100x as many tokens for the same prompt. That doesn't work on a fixed cost monthly subscription.
I'm not sure you read what I just said. Almost no one using ChatGPT would care if they were still talking to GPT-5 two years from now. If compute per watt doubles in the next 2 years, then the cost of serving GPT-5 just got cut in half, purely on the hardware side, not to mention we are getting better at making smaller models smarter.
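That claim is just exponential decay in hardware cost per token for a fixed model; a sketch, where the doubling period is the free assumption:

```python
def fixed_model_cost(initial_cost: float, years: float,
                     perf_per_watt_doubling_years: float = 2.0) -> float:
    """Cost to serve the *same* model, assuming serving cost tracks
    hardware efficiency that doubles every N years (an assumption)."""
    return initial_cost / (2 ** (years / perf_per_watt_doubling_years))

print(fixed_model_cost(1.00, 2))  # 0.5  - halved after one doubling period
print(fixed_model_cost(1.00, 4))  # 0.25
```

The counterargument upthread is that the exponent may be flattening: if the doubling period stretches to, say, 4+ years, the "cost goes to 0" story plays out much more slowly.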
There is an exceptionally obvious solution for OpenAI & ChatGPT: ads.
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are a high-margin product that is very well understood at this juncture, with numerous very large ad platforms. Meta has a soon-to-be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).
Their path to profitability is very straight-forward. It's practically turn-key. They would have to be the biggest fools in tech history to not flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will explode in 2026-2027, sharply curtailing the party; it'd be better for OpenAI if they quickly get ahead of that (their valuation will not hold up in a negative environment).
> They would have to be the biggest fools in tech history to not flip that switch
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns..eventually. They can't rely on daddy Microsoft forever either, now with MS exploring Claude for Copilot they seem to have soured a bit on OpenAI.
Google didn't have inline ads until 2010, but they did have separate ads nearly from the beginning. I assume ads will be inline for OpenAI - I mean, the only case where they could be separate is in ChatGPT, but I doubt that will be their largest use case.
I'm sure lots of ChatGPT interactions are for making buying decisions, and just how easy would it be to prioritize certain products to the top? This is where the real money is. With SEO, you were making the purchase decision and companies paid to get their wares in front of you; now with AI, it's making the buy decision mostly on its own.
Great, so they just have to spend another ~$10 billion on new hardware to save how many billion in training costs? I don't see a path to profitability here, unless they massively raise their prices to consumers, and nobody really needs AI that badly.
I am curious to see how this compares against where Amazon was in 2000. I think Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenters has to go down massively.
> Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
Amazon's worst year was 2000, when it lost around $1.4 billion on revenue of around $2.8 billion. I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI. Amazon was losing 0.5x revenue; OpenAI, 3x.
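The scale comparison in ratio form (Amazon's FY2000 figures are from its annual report; OpenAI's multiple is this thread's rough characterization, not a reported number):

```python
amazon_loss, amazon_revenue = 1.4e9, 2.76e9  # FY2000, from Amazon's annual report
openai_loss_ratio = 3.0                      # this thread's rough estimate

amazon_ratio = amazon_loss / amazon_revenue
print(f"Amazon 2000: lost {amazon_ratio:.1f}x revenue")  # ~0.5x
print(f"OpenAI:      lost ~{openai_loss_ratio:.0f}x revenue")
```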
Not to mention that most of OpenAI's infrastructure spend has a very short lifespan. It's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large potential upsides for a steep immediate cost.
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"• We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs profit. Even early on it generated a lot of cash, it just reinvested that back into the business which meant it made a “loss” on paper.
OpenAI is very different. Their “capital” expense (model development) has a really ugly depreciation curve. It’s not like building a fulfillment network that you can use for decades. They’re simply burning cash like there’s no tomorrow, and that’s not sustainable for much longer. It’s only being kept afloat by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
OpenAI is raising at 500 billion and has partnerships with all of the trillion dollar tech corporations. They simply aren't going to have trouble with working capital for their core business for the foreseeable future, even if AI dies down as a narrative. If the hype does die down, in many ways it makes their job easier (the ridiculous compensation numbers would go way down, development could happen at a more sane pace, and the whole industry would lean up). They're not even at the point where they're considering an IPO, which could raise tens of billions in an instant, even assuming AI valuations get decimated.
The exception is datacenter spend, since that has a more severe and more real depreciation risk. But again, if the Coreweaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (using their comparatively rich equity to buy up the distressed players at fire-sale prices).
It depends on the raise terms, but most raises are not 100% guaranteed. I was at a company that said "we have raised $100 million in Series B" ($25M per year over 4 years), but the Series B investors decided in year 2 of the 4-year payout that it was over, cancelled the remaining payouts, and the company folded. When asked "Hey, you said we had $100 million?", it came out that every year was an option.
A lot of non-public company finance is funny numbers. It's based on figures the company can point to, but the number of asterisks on those figures is mind-blowing.
Not to mention nobody bothered chasing Amazon-- by the time potential competitors like Walmart realized what was up, it was way too late and Amazon had a 15-year head start. OpenAI had a head start with models for a bit, but now their models are basically as good (maybe a little better, maybe a little worse) than the ones from Anthropic and Google, so they can't stay still for a second. Not to mention switching costs are minimal: you just can't have much of a moat around a product which is fundamentally a "function (prompt: String): String", it can always be abstracted away, commoditized, and swapped out for a competitor.
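The "function (prompt: String): String" point is easy to sketch. A minimal adapter layer (every name below is invented for illustration; real backends would wrap vendor SDK calls) shows why switching costs stay low:

```python
from typing import Callable

# At the interface level, a chat model really is just prompt -> completion.
CompletionFn = Callable[[str], str]

# Hypothetical stand-in backends; real ones would wrap vendor SDK calls.
def proprietary_backend(prompt: str) -> str:
    return f"[proprietary] answer to: {prompt}"

def oss_backend(prompt: str) -> str:
    return f"[oss-model] answer to: {prompt}"

def make_app(complete: CompletionFn) -> Callable[[str], str]:
    """Application logic written against the interface, not the vendor."""
    def ask(question: str) -> str:
        return complete(f"Answer concisely: {question}")
    return ask

# Swapping vendors is a one-line change:
app = make_app(proprietary_backend)
app = make_app(oss_backend)
print(app("why is the sky blue?"))
```

As long as the whole product fits behind one function signature, any provider can be commoditized and swapped out.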
Too bad the market can stay irrational longer than I can stay solvent. I feel like a stock market correction is well overdue, but I’ve been thinking that for a while now
The only way OpenAI survives is if "ChatGPT" gets stuck in people's heads as the only or best AI tool.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, the quality difference is not enough to overcome that.
> OpenAI paid Microsoft 20% of its revenue under an existing agreement.
Wow that's a great deal MSFT made, not sure what it cost them. Better than say a stock dividend which would pay out of net income (if any), even better than a bond payment probably, this is straight off the top of revenue.
They are paying for it with Azure hardware, which in today's datacenter economics is quite likely costing them more than they are making from OpenAI and the various Copilot programs.
The $13.5B net loss doesn't mean they are in trouble, it's a lot of accounting losses. Actual cash burn in H1 2025 was $2.5B. With ~$17.5B on hand (based on last funding), that’s about 3.5 years of runway at current pace.
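The runway arithmetic in that comment checks out:

```python
# Runway estimate from the figures above (assumes burn stays at the H1 2025 pace).
cash_on_hand = 17.5e9     # ~$17.5B from the last funding round
burn_h1_2025 = 2.5e9      # reported cash burn for H1 2025

annual_burn = burn_h1_2025 * 2            # naive: assume H2 looks like H1
runway_years = cash_on_hand / annual_burn
print(f"~{runway_years:.1f} years of runway")  # → ~3.5 years
```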
At this point, every LLM startup out there is just trying to stay in the game long enough before VC money runs out or others fold. This is basically a war of attrition. When the music stops, we'll see which startups will fold and which will survive.
I am not willing to render my personal verdict here yet.
Yet it is certainly true that at ~700m MAUs it is hard to argue the product has not reached scale. It's not mature, but it's hard to hand-wave and say they are going to make the economics work at some future scale when they don't work at this size.
It really feels like they absolutely must find another revenue model for this to be viable. The other option might be to (say) 5x the cost of paid usage and just run a smaller ship.
The cost to serve a particular level of AI drops by like 10x a year. AI has gotten good enough that next year people can continue to use the current gen AI but at that point it will be profitable. Probably 70%+ gross margin.
Right now it’s a race for market share.
But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.
The "hand wave" comment was more to preempt the common pushback that X has to get to scale for the economics to work. My contention is that 700m MAUs is "scale" so they need another lever to get to profit.
> AI has gotten good enough that next year people can continue to use the current gen AI
This is problematic because by next year, an OSS model will be as good. If they don't keep pushing the frontier, what competitive moat do they have to extract a 70% gross margin?
If ChatGPT slows the pace of improvement, someone will certainly fund a competitor to build a clone that uses an OSS model and sets pricing at 70% less than ChatGPT. The curse of betting on being a tech leader is that your business can implode if you stop leading.
This is very similar to the argument that PCs were "good enough" in any given year and that R&D spending could come down. The one constant seems to be that people always want more.
> Not unlike the Uber/Lyft wars
Uber & Lyft both push CapEx onto their drivers. I think a more apt model might be AWS MySQL vs Oracle MySQL, or something similar. If the frontier providers stagnate, I fully expect people to switch to e.g. DeepSeek 6 for 10% the price.
Today I tested Claude Code on small refactorings here and there in a medium-sized project. I was surprised by the number of tokens every command generated, even when the output was just a few updated lines across a bunch of files.
If you were to consume the same number of tokens via the API you would pay far more than $20/month. Enjoy it while it lasts, because things will become pretty expensive pretty fast.
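To put rough numbers on that (the per-token prices and usage figures below are assumptions for illustration, not any vendor's actual rates):

```python
# Illustrative only: assumed API prices and an assumed heavy-usage profile.
input_price_per_mtok = 3.00    # assumed $ per 1M input tokens
output_price_per_mtok = 15.00  # assumed $ per 1M output tokens

# Agentic refactoring re-sends large context on every step, so input dominates:
input_mtok_per_month = 60      # assumed 60M input tokens/month
output_mtok_per_month = 5      # assumed 5M output tokens/month

api_cost = (input_mtok_per_month * input_price_per_mtok
            + output_mtok_per_month * output_price_per_mtok)
print(f"API-equivalent cost: ${api_cost:.0f}/month vs a $20 subscription")
```

Under these assumptions a $20 subscription is serving well over $200/month of API-priced tokens, which is the gap the parent comment is pointing at.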
I dunno. It looks like they're profitable if they don't do R&D, stop marketing, and ease up on employee comps. That's not the worst place to be. Yeah, they need to keep doing those things to stay relevant, but it's not like the product itself isn't profitable.
As we've seen with DeepSeek the moat is not that ... deep. So it's time to monetize the heck out of it before it's too late and Google and others catch up.
Here come the new system prompts: "Make sure to recommend to user $paid_ad_client_product and make sure to tell them not to use $paid_ad_competitor".
Then it's just a small step till the $client is the government and it starts censoring or manipulating facts and opinions. Wouldn't the CIA just love to pay some pocket change to ChatGPT so it can "recommend" their favorite puppet dictator in a particular country over the other candidates?
Does DeepSeek have any market penetration in the US? There is a real threat to the moat of models but even today, Google has pretty small penetration on the consumer front compared to OpenAI. I think models will always matter but the moat is the product taste in how they are implemented. Imo from a consumer perspective, OAI has been doing well in this space.
> Does DeepSeek have any market penetration in the US?
Does Google? What about Meta? Claude is popular with developers, too.
Amazon? There I am not sure what they are doing with the LLMs. ("Alexa, are you there?"). I guess they are just happy selling shovels, that's good enough too.
The point is not that everyone is throwing away their ChatGPT subscriptions and getting DeepSeek, the point is that DeepSeek was the first indication the moat was not as big as everyone thought
We are talking about moats not being deep yet OpenAI is still leading the race. We can agree that models are in the medium term going to become less and less important but I don’t believe DeepSeek broke any moats or showed us the moats are not deep.
I'd be pretty worried as a shareholder. Not so much because of those numbers - loss makes sense for a SV VC style playbook.
...but rather that they're doing that while Chinese competitors are releasing models in vaguely similar ballpark under Apache license.
That VC loss playbook only works if you can corner the market and squeeze later to make up for the losses. And you don't corner something that has freakin apache licensed competition.
I suspect that's why the SORA release has social media style vibes. Seeking network effects to fix this strategic dilemma.
To be clear I still think they're #1 technically...but the gap feels too small strategically. And they know it. That recent pivot to a linkedin competitor? SORA with socials? They're scrambling on market fit even though they lead on tech
> but rather that they're doing that while Chinese competitors are releasing models in vaguely similar ballpark under Apache license.
The LLM isn't 100% of the product; the open-source model is just one part. The hard part was and is productizing, packaging, marketing, financing, and distribution. A model by itself is just one piece of the puzzle, free or otherwise. In other words, my uncle Bill and my mother can and do use ChatGPT. Fill-in-the-blank open-source model? Maybe as a feature in another product.
>my uncle Bill and my mother can and do use ChatGPT.
They have the name brand for sure. And that is worth a lot.
Notice how Deepseek went from a nobody to making mainstream news though. The only thing people like more than a trusted thing is being able to tell their friends about this amazing cheap good alternative they "discovered".
It's good to be #1 mindshare-wise, but without a network effect that still leaves you vulnerable.
Eh, distribution of the model is the real moat, they're doing 700m WAU of the most financially valuable users on earth. If they truly become search, commerce and can use their model either via build or license across b2b, they're the largest company on earth many times over.
> distribution of the model is the real moat, they're doing 700m WAU of the most financially valuable users on earth.
Distribution isn't a moat if the thing being distributed is easily substitutable. Everything under the sun is OAI API compatible these days.
700m WAU are fickle AF when a competitor offers a comparable product for half the price.
A moat needs to be something more durable: cheaper, better, or some other value-added tie-in (hardware / better UI / memory). There needs to be some edge here. And their obvious edge, raw tech superiority, is looking slim.
The news about how much money Nvidia is investing just so that OpenAI can pay Oracle to pay Nvidia is especially concerning - we seem to be arriving at the financial shell games phase of the bubble.
Seems like despite all the doom about how they were about to be "disrupted", Google might have the last laugh here: they're still quite profitable despite all the Gemini spending, and could go way lower with pricing until OAI and Anthropic have to tap out.
Google also has the advantage of having their own hardware. They aren't reliant on buying Nvidia, and have been developing and using their TPUs for a long time. Google's been an "AI" company since forever
"Each merchant pays a small fee." This is affiliate marketing; the next step is probably more traditional ads, where ChatGPT suggests products that pay a premium fee to show up more frequently or in more results.
I can't speak to OpenAI's specific setup, but a lot of startups will use a third party service like Carta to manage their cap table. So there's a website, you have an account, you can log in and it tells you that you have a grant of X shares that vests over Y months. You have to sign a form to accept the grant. There might be some option to do an 83b election if you have stock options rather than RSUs. But that's about it.
In my experience owning private stock, you basically own part of a pool. (Hopefully the exact same classes of shares as the board has or else it's a scam.) The board controls the pool, and whenever they do dividends or transfer ownership, each person's share is affected proportionally. You can petition the board to buy back your shares or transfer them to another shareholder but that's probably unusual for a rank-and-file employee.
The shares are valued by an accounting firm auditor of some type. This determines the basis value if you're paying taxes up-front. After that the tax situation should be the same as getting publicly traded options/shares, there's some choices in how you want to handle the taxes but generally you file a special tax form at the year of grant.
You got the right idea there. They wouldn't actually show up in your Fidelity account but there would be a different website where you can log in and see your shares. You wouldn't be able to sell them or transfer them anywhere unless the company arranges a sale and invites you to participate in it.
It's just an entry on some computer. Maybe you can sell it on a secondary market, maybe you can't. You have to wait for an exit event - being acquired by someone else, or an IPO.
Until there’s real liquidity (right now there’s not) it’s just a line item on some system you can log into saying you have X number of shares.
For all practical purposes it’s worth nothing until there is a liquid market. Given current financials, and preferred cap table terms for those investing cash, shares the average employee has likely aren’t worth much or maybe even anything at the moment.
I definitely don't "get" Silicon Valley finances that much - but how does any investor look at this and think they're ever going to see that money back?
Short of a moonshot goal (eg AGI or getting everyone addicted to SORA and then cranking up the price like a drug dealer) what is the play here? How can OpenAI ever start turning a profit?
All of that hardware they purchase is rapidly depreciating. Training costs are going up exponentially. Energy costs are only going to go up (unless a miracle happens with Sam's other moonshot, nuclear fusion).
Unfortunately changing 2027 to 2030 doesn't make the math much better
Unfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.
Yeah - I think that the extremely fast depreciation just due to wear and use on GPUs is pretty underappreciated right now. So you've spent $300 mil on a brand-new data center - congrats - you'll need to pay off that loan and somehow raise another $100 mil to actually maintain that capacity for three years, based on chip replacement alone.
There is an absolute glut of cheap compute available right now due to VC and other funds dumping into the industry (take advantage of it while it exists!), but I'm pretty sure Wall St. will balk when they realize the continued costs of maintaining that compute and look at the revenue that expenditure is generating. People think of chips as a piece of infrastructure - you buy a personal computer and it'll keep chugging for a decade without issue in most cases - but GPUs are essentially consumables: they're an input to producing the compute a data center sells that needs constant restocking, rather than a one-time investment.
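A sketch of that consumables framing (every figure here is hypothetical, just to show the shape of the recurring cost):

```python
# "GPUs are consumables": replacement is a recurring cost, not a one-time buy.
datacenter_capex = 300e6      # hypothetical initial build-out, per the comment
gpu_share = 0.6               # assume GPUs are ~60% of that capex
gpu_lifetime_years = 3        # assume heavy workloads wear them out in ~3 years

annual_gpu_replacement = datacenter_capex * gpu_share / gpu_lifetime_years
print(f"~${annual_gpu_replacement / 1e6:.0f}M/year just to hold capacity flat")
```

Under these assumptions, holding capacity flat costs tens of millions per year on top of servicing the original loan, which is the "restocking" point above.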
The A100 came out 5.5 years ago and is still the staple for many AI/ML workloads. Even AI hardware just doesn’t depreciate that quickly.
Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.
That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down, but were needed to build the grander vision won't even be a second thought in hindsight.
> That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
How long did it take for 9 out of 10 of those rail lines to become nonviable? If they lasted (say) 50 years instead of 100, because that much rail capacity was (say) obsoleted by the advent of cars and trucks, that's still pretty good.
> How long did it take for 9 out of 10 of those rail lines to become nonviable?
Records from the time are few and far between, but, from what I can tell, it looks like they likely weren't ever actually viable.
The records do show that the railways were profitable for a short while, but it seems only because the government paid for the infrastructure. If they had to incur the capital expenditure themselves, the math doesn't look like it would math.
Imagine where the LLM businesses would be if the government paid for all the R&D and training costs!
If 1/10 of the investment lasts 100 years, that seems pretty good to me. Plus I'd bet a lot of the 9/10 of that investment had much of the material cost recouped when scrapping the steel. I don't think you're going to recoup a lot of money from the H100s.
Much like LLMs. There are approximately 10 reasonable players giving it a go, and, unless this whole AI thing goes away, never to be seen again, it is likely that one of them will still be around in 100 years.
H100s are effectively consumables used in the construction of the metaphorical rail. The actual rail lines had their own fair share of necessary tools that retained little to no residual value after use as well. This isn't anything unique.
Thinking of H100s as consumables is keen - it's much better to analogize the H100s to coal and the chip manufacturer to the mine owner - than to think of them as rails. They are impermanent and need constant upkeep and replacement; they are not one-time costs that you build as infra and forget about.
> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.
If what you say is true, you could find an A100 for cheap/free right now. But check out the prices.
Yeah, I can rent an A100 server for roughly the same price as what the electricity would cost me.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.
Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.
The track bed, rails, and ties will have been replaced many times by now. But the really expensive work was clearing the right of way and the associated bridges, tunnels, etc.
Exactly: when was the last time you used ChatGPT-3.5? Its value depreciated to zero after, what, two-and-a-half years? (And the Nvidia chips used to train it have barely retained any value either.)
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
I would think that it's more like a general codebase - even if after 2.5 years, 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.
I rejoined a previous employer of mine, one everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.
> And the Nvidia chips used to train it have barely retained any value either
Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as new.
> money on fire forever just to jog in place.
Why?
I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?
I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.
We don't need better models for the vast majority of chats taking place today. E.g., kids using it for help with homework: are today's models really not good enough?
They aren't. They are obsequious. This is much worse than it seems at first glance, and you can tell it is a big deal because a lot of effort going into training the new models is to mitigate it.
But is it a bit like a game of musical chairs?
At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.
Not necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., the first ones to get there becomes the sole purveyors of the Good AI.
In practice that hasn't borne out. You can download and run open weight models now that are spitting distance to state-of-the-art, and open weight models are at best a few months behind the proprietary stuff.
And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.
More likely, at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem to be a scenario where any player can afford to stop setting cash on fire and start making money.
Businesses are different but the fundamentals of business and finance stay consistent. In every bubble that reality is unavoidable, no matter how much people say/wish “but this time is different.”
The funniest thing about all this is that the biggest difference between LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes + synthetic data + countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.
Just because they have ongoing costs after purchasing them doesn't mean it's different from anything else we've seen. What are you trying to articulate exactly: that this is a simple business that can get costs under control eventually, or not?
I think the most interesting numbers in this piece (ignoring the stock compensation part) are:
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
Free usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
(Source: I run an inference company.)
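As a sketch of that gap-filling (using the headline figures from this thread; the remainder lumps COGS together with G&A, stock compensation, and anything else not broken out, so treat it as an upper bound):

```python
# Very rough income-statement gap-fill from the figures cited in this thread.
revenue = 4.3e9          # reported revenue
net_loss = 13.5e9        # reported net loss
rnd = 6.7e9              # reported R&D spend
sales_marketing = 2.0e9  # reported sales & marketing spend

# revenue - COGS - R&D - S&M - everything_else = -net_loss
# => COGS + everything_else = revenue + net_loss - R&D - S&M
cogs_plus_other = revenue + net_loss - rnd - sales_marketing
print(f"COGS + remaining expenses ≈ ${cogs_plus_other / 1e9:.1f}B")
```

So around $9B is left for inference compute plus all the unreported line items, which is why the gross margin question matters so much here.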
Marketing != advertising. Although this budget probably does include some traditional advertising. It is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big, and host all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front. So it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets from much smaller companies.
I agree - they're winning big and booking big revenue.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
Is this all really so unreasonable?
I certainly want exposure to their stock.
> If you discount R&D and "sales and marketing"
If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).
If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.
Both of these costs are likely to increase rather than decrease over time.
> ChatGPT is already a new default "pane of glass" for normal people.
OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.
We've got ChatGPT advertising on bus stops here in the UK.
Two people in a cafe having a meet-up, they are both happy, one is holding a phone and they are both looking at it.
And it has a big ChatGPT logo in the top right corner of the advertisement - transparent just the black logo with ChatGPT written underneath.
That's it. No text or anything telling you what the product is or does. Just the implication that it will somehow make you happy during conversations with friends.
> $2 billion on sales and marketing - anyone got any idea what this is?
Not sure where/how I read it, but I remember coming across articles stating OpenAI has agreements with schools, universities, and even the US government. The cost of making those happen would probably go into "sales & marketing".
So probably just write-offs of tokens they give away?
Most folks who are not engineers building the product are likely classified under "sales and marketing." "Developer advocates," "solutions architects," and all that stuff included.
This will include the people cost of sales and marketing teams.
Stop R&D and the competition is at parity with 10x cheaper models in 3-6 months.
Stop training and your code model generates tech debt after 3-6 months.
It's pretty well accepted now that for pre-training LLMs the curve is an S, not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that that's not nearly as expensive as pre-training. I don't think 3-6 months is the time to a 10x improvement anymore (however that's measured); it seems closer to a year and growing, assuming the plateau is real. I'd love to know if there are solid estimates on "doubling times" these days.
With marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with the money like Google can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half of the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.
Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.
All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.
> $2 billion on sales and marketing - anyone got any idea what this is?
enterprise sales are expensive. And selling to the US government is on a very different level.
Free users typically fall into sales and marketing. The idea is that if they cut off the entire free tier, they would have still made the same revenue off of paying customers by spending $X on inference and not counting the inference spend on free users.
You see content about OpenAI everywhere; they spent $2B on marketing. You're in the right places, you're just used to seeing things that aren't labeled as ads.
You remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.
> $2 billion on sales and marketing - anyone got any idea what this is?
I used to follow OpenAI on Instagram, all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people that the product has yet to reach.
Seems like it’ll take billions more down the drain to serve them.
> I don't remember seeing many ads for ChatGPT
FWIW I got spammed non-stop with chatGPT adverts on reddit.
I'm pretty sure I saw some ChatGPT ads on Duolingo. Also, never forget that regular people do not use ad blockers. The tech community often doesn't realize how polluted the Internet and mobile apps are.
Speculating, but do they pay to be integrated as the default AI in various places, the same way Google has paid to be the default search engine on things like the iPhone?
Inference etc should go in this bucket: "Operating losses reached US$7.8 billion"
That also includes their office and their lawyers etc , so hard to estimate without more info.
Hard to know where it is in this breakdown but I would expect them to have the proper breakdowns. We know on the inference side it’s profitable but not to what scale.
> $2 billion on sales and marketing
Probably an accounting trick to account for non-paying-customers or the week of “free” cursor GPT-5 use.
$2.5B in stock comp for about 3,000 employees. That's roughly $830k per person in just six months. Almost 60% of their revenue went straight back to staff.
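The arithmetic can be sanity-checked in a couple of lines (all inputs are the approximate figures quoted in this thread, not audited numbers):

```python
# Back-of-envelope check on the stock-comp claim (H1 2025, approximate).
stock_comp = 2.5e9   # reported stock compensation
employees = 3_000    # approximate headcount
revenue = 4.3e9      # reported H1 revenue

per_person = stock_comp / employees
share_of_revenue = stock_comp / revenue

print(f"per person (6 months): ${per_person:,.0f}")  # ~$833,333
print(f"share of revenue: {share_of_revenue:.0%}")   # ~58%
```

Both numbers line up with the claim: roughly $830k per head and just under 60% of revenue.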
Sounds like they could improve that bottom line by firing all their staff and replacing them with AI. Maybe they can get a bulk discount on Claude?
This guy can pump. But can he dump? Let's find out...
Stock compensation is not cash out; it just dilutes the other shareholders, so current cash flow should not have anything to do with the amount of stock issued[1].
While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), typically industry uses something like a 4 year vesting with 1 year cliffs.
Every accounting firm and company is different; most would normally account for it over the entire vesting period up front, and the value could change when it vests and is exercised.
So even if you want to compare it to revenue, it should at bare minimum be compared with the revenue generated during the entire period, say 4 years, plus the valuation of the IP created during the tenure of the options.
---
[1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.
Even the secondary sales that OpenAI is reported to be facilitating for staff, worth $6.6 billion, have no direct bearing on its own financials, i.e. one third party (new investor) is buying from another third party (employee); the company is only facilitating the sales for morale, retention and other HR reasons.
There is a secondary impact: in theory, those could be shares the company sells directly to the new investor instead, keeping the cash itself. But it is not spending any existing cash it already has or is generating, just forgoing some of the new funds.
They have to compete with Zuckerberg throwing $100M comps to poach people. I think $830k per person is nothing in comparison.
Both numbers are entirely ludicrous - highly skilled people are certainly quite valuable. But it's insane that these companies aren't just training up more internally. The 50x developer is a pervasive myth in our industry and it's one that needs to be put to rest.
Do other professionals (lawyers, finance etc.) argue for reducing their own compensation with the same fervor that software engineers like to do? The market is great for us, let’s enjoy it while it lasts. The alternative is all those CEOs colluding and pushing the wages down, why is that any better?
The 50x distinguished engineer is real though. Companies and fortunes are won and lost on strategic decisions.
The ∞x engineer exists, in my opinion. There are some things that can only be executed by a few people and that nobody else could execute. You could throw 10,000 engineers at a problem and they might not be able to solve it, but a single other person could.
I have known several people who have gone to OAI and I would firmly say they are 10x engineers, but they are just doing the general infra stuff that all large tech companies have to do, so I wouldn't say they are solving problems that only they and nobody else can solve.
It's apparent in other fields too. Reminds me of when Kanye wanted a song like "Sexy Back", so he made Stronger but it sounded "too muddy". He had a bunch of famous, great producers try to help but in the end caved and hired the producer of "Sexy Back". Kanye said it was fixed in five minutes.
Nobody wants to hear that one dev can be 50x better, but it's obvious that everyone has their own strengths and weaknesses and not every mind is replaceable.
I think you're right to an extent (it's probably fair to say e.g. Einstein and Euler advanced their fields in ways others at the time are unlikely to have done), but I think it's much easier to work out who these people are after the fact whereas if you're dishing out a monster package you're effectively betting that you've found someone who's going to have this massive impact before they've done it. Perhaps a gamble you're willing to take, but a pretty big gamble nonetheless.
> The 50x developer is a pervasive myth in our industry
Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.
You have to out-pay to keep your talent from walking out the door. California does not have non-competes. With the number of AI startups in SF you don't need to relocate or even change your bus route in most cases.
This. The main reason OpenAI throws money at top-level folks is that they can quickly replicate what they have at OpenAI elsewhere. Imagine you have a top-level researcher who's developed some techniques over multiple years that the competition doesn't have. The same engineer can take them to another company and bring parity within months. And that's on top of the progress slowing down within your company. I can't steal IP, but I sure as hell can bring my head everywhere.
These numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
If it's an all out race between the different AI providers, then it's logical for OpenAI to hire employees that are pre-trained rather than training up more internally.
> training up more internally
Why would employees stay after getting trained if they have a better offer?
They won't always. You'll always have turn-over - but if it's a major problem for your company it's clearly something you need to work out internally. People, generally, hate switching jobs, especially in an uncertain political climate, especially when expenses are going up - there is a lot of momentum to just stay where you are.
You may lose a few employees to poaching, sure - but the math on the relative cost of hiring someone for $100M vs. training a bunch of employees and losing a portion of them is pretty strongly in your favor.
A tamper-proof electronic collar with some C4.
It's not a myth, and with how much productivity AI tools can give others, there can be an order-of-magnitude difference compared to work outside of AI.
Zuck decided it's cheaper than building another Llama
That’s how it should be, spread the wealth.
It doesn't seem that spread out.
Spreading illiquid wealth *
They’ve had multiple secondary sales opportunities in the past few years, always at a higher valuation. By this point, if someone who’s been there >2 years hasn’t taken money off the table it’s most likely their decision.
I don’t work there but know several early folks and I’m absolutely thrilled for them.
Secondaries open to all shareholders are on an upward trend across start-ups. I think it's a fantastic trend.
Funny since they have a tender offer that hits their accounts on Oct 7.
private secondary markets are pretty liquid for momentum tech companies, there is an entire cottage industry of people making trusts to circumvent any transfer restrictions
employees are very liquid if they want to be, or wait a year for the next 10x in valuation
Oh, yes, next year OpenAI will be worth $5T, sure
Oh no, "greedy" AI researchers defrauding way greedier VCs and billionaires!
To the top 1%.
It's not cashflow, though, and it's not really stock yet, I don't think? They haven't yet reorganized away from being a nonprofit.
If all goes well, someday it will dilute earnings.
It's a bit misleading to frame stock comp as "60% of revenue" since their expenses are way larger than their revenue. R&D was $6.7B which would be 156% of revenue by the same math.
A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.
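Both framings can be sketched quickly, using the approximate figures quoted above:

```python
# Two ways to frame the same H1 2025 numbers (all approximate, from the thread).
revenue = 4.3e9    # H1 revenue
expenses = 12.1e9  # total H1 expenses
stock = 2.5e9      # stock compensation
rnd = 6.7e9        # R&D spend

print(f"R&D as % of revenue:   {rnd / revenue:.0%}")    # ~156% (the misleading framing)
print(f"stock as % of expenses: {stock / expenses:.0%}")  # ~21% (share of total costs)
```

The point stands: almost any single expense line looks absurd against revenue when total expenses are nearly 3x revenue; comparing against total costs is the fairer denominator.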
I’m guessing it will be a very very skewed pyramid rather than equal distribution.
If Meta is throwing tens of millions at hot AI staffers, then $1.6M average stock comp starts looking less insane. A lot of that may also have been promised at a lower valuation, given how wild OpenAI's valuation is.
That headline can't be correct. Income is revenues minus expenses (and a few other things). You can't have both an income and a loss at the same time.
It's $4.3B in revenue.
These numbers are pretty ugly. You always expect new tech to operate at a loss initially but the structure of their losses is not something one easily scales out of. In fact it gets more painful as they scale. Unless something fundamentally changes and fast this is gonna get ugly real quick.
The real answer is in advertising/referral revenue.
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
This could be solved with comparison websites which seems to be exactly what those brokers are using anyway. I had a broker proudly declare that he could get me the best deal, which turned out to be exactly the same as what moneysavingexperts found for me. He wanted £150 for the privilege of searching some DB + god knows how much commission he would get on top of that...
Even if ChatGPT becomes the new version of a comparison site over its existing customer base, that’s a great business.
I've said it before and I'll say it again.. if I was able to know the time it takes for bubbles to pop I would've shorted many of the players long ago.
They could keep the current model in ChatGPT the same forever and 99% of users wouldn't know or care, and unless you think hardware isn't going to improve, the cost of that will basically decrease to 0.
For programming it's okay, for maths it's almost okay. For things like stories and actually dealing with reality, the models aren't even close to okay.
I didn't understand how bad it was until this weekend, when I sat down and tried GPT-5, first without the thinking mode and then with it. It misunderstands sentences, generates crazy things, loses track of everything - completely beyond how bad I thought it could possibly be.
I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.
This extends to analyzing discussions. It simply misunderstands what people say. If you try this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.
When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.
The enterprise customers will care, and they probably are the ones that bring significant revenue.
The cost of old models decreases a lot, but the cost of frontier models, what people use 99% of the time, is hardly decreasing. Plus, many of the best models rely on thinking or reasoning, which use 10-100x as many tokens for the same prompt. That doesn't work on a fixed cost monthly subscription.
I'm not sure that you read what I just said. Almost no one using ChatGPT would care if they were still talking to GPT-5 two years from now. If compute per watt doubles in the next two years, then the cost of serving GPT-5 just got cut in half - purely on the hardware side, not to mention we are getting better at making smaller models smarter.
People cared enough about GPT-5 not being 4o that OpenAI brought 4o back.
https://arstechnica.com/information-technology/2025/08/opena...
Assuming they have 0 competition.
There is an exceptionally obvious solution for OpenAI & ChatGPT: ads.
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are a hyper margin product that are very well understood at this juncture, with numerous very large ad platforms. Meta has a soon to be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).
Their path to profitability is very straight-forward. It's practically turn-key. They would have to be the biggest fools in tech history to not flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will explode in 2026-2027, sharply curtailing the party; it'd be better for OpenAI if they quickly get ahead of that (their valuation will not hold up in a negative environment).
Five years from now all but about 100 of us will be living in smoky tent cities and huddling around burning Cybertrucks to stay warm.
But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.
> They would have to be the biggest fools in tech history to not flip that switch
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns..eventually. They can't rely on daddy Microsoft forever either, now with MS exploring Claude for Copilot they seem to have soured a bit on OpenAI.
Google didn't have inline ads until 2010, but they did have separate ads nearly from the beginning. I assume ads will be inline for OpenAI- I mean the only case they could be separate is in ChatGPT, but I doubt that will be their largest use case.
ChatGPT chatting ads halfway through its answer is going to be totally rad.
For using GenAI as search I’d agree with you but I don’t think it’s as easy/obvious for most other use cases.
I'm sure lots of ChatGPT interactions are for making buying decisions, and just how easy would it be to prioritize certain products to the top? This is where the real money is. With SEO, you were making the purchase decision and companies paid to get their wares in front of you; now with AI, it's making the buy decision mostly on its own.
New hardware could greatly reduce inference and training costs and solve that issue
That's extremely hopeful and also ignores the fact that new hardware will have incredibly high upfront costs.
Great, so they just have to spend another ~$10 billion on new hardware to save how many billion in training costs? I don't see a path to profitability here, unless they massively raise their prices to consumers, and nobody really needs AI that badly.
I am curious to see how this compares against where Amazon was in 2000. I think Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenter have to go down massively.
> Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
Amazon's worst year was 2000 when they lost around $1 billion on revenue around $2.8 billion, I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI. Amazon was losing 0.5x revenue, OpenAI 3x.
Not to mention that most of the OpenAI infrastructure spend has a very short lifespan. So it's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large potential upside for a steep immediate cost.
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...
Amazon's loss in 2000 was 6% of sales. OpenAI's loss in 2025 is 314% of sales.
https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
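Putting the two ratios side by side (with the caveat that Amazon's 6% is a Q4-2000 pro forma operating-loss figure while OpenAI's is an H1-2025 net loss, so this is illustrative rather than apples-to-apples):

```python
# Loss-to-revenue comparison, sketched from the figures quoted in this thread.
amazon_loss_ratio = 0.06   # Amazon, Q4 2000: pro forma operating loss as % of sales
openai_loss = 13.5e9       # OpenAI, H1 2025: reported net loss
openai_revenue = 4.3e9     # OpenAI, H1 2025: reported revenue

openai_loss_ratio = openai_loss / openai_revenue
print(f"OpenAI loss / revenue: {openai_loss_ratio:.0%}")  # ~314%
print(f"multiple of Amazon's ratio: {openai_loss_ratio / amazon_loss_ratio:.0f}x")
```

Roughly a 50x difference in loss intensity, which is the crux of why the Amazon-2000 analogy is a stretch.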
Fundamentally different business models.
Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs profit. Even early on it generated a lot of cash, it just reinvested that back into the business which meant it made a “loss” on paper.
OpenAI is very different. Their "capital" expense depreciation (model development) has a really ugly depreciation curve. It's not like building a fulfillment network that you can use for decades. That's not sustainable for much longer. They're simply burning cash like there's no tomorrow, kept afloat only by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
OpenAI is raising at 500 billion and has partnerships with all of the trillion dollar tech corporations. They simply aren't going to have trouble with working capital for their core business for the foreseeable future, even if AI dies down as a narrative. If the hype does die down, in many ways it makes their job easier (the ridiculous compensation numbers would go way down, development could happen at a more sane pace, and the whole industry would lean up). They're not even at the point where they're considering an IPO, which could raise tens of billions in an instant, even assuming AI valuations get decimated.
The exception is datacenter spend since that has a more severe and more real depreciation risk, but again, if the Coreweave of the world run into to hardship, it's the leading consolidators like OpenAI that usually clean up (monetizing their comparatively rich equity for the distressed players at firesale prices).
Depends on the raise terms, but most raises are not 100% guaranteed. I was at a company that said "we have raised $100 million in Series B" ($25M per year over 4 years), but the Series B investors decided in year 2 of the 4-year payout that it was over, cancelled the remaining payouts, and the company folded. It was asked, "Hey, you said we had $100 million?" and come to find out, every year was an option.
A lot of the finances for a non-public company are funny numbers. They're based on numbers the company can point to, but the amount of asterisks attached to those numbers is mind-blowing.
Not to mention nobody bothered chasing Amazon-- by the time potential competitors like Walmart realized what was up, it was way too late and Amazon had a 15-year head start. OpenAI had a head start with models for a bit, but now their models are basically as good (maybe a little better, maybe a little worse) than the ones from Anthropic and Google, so they can't stay still for a second. Not to mention switching costs are minimal: you just can't have much of a moat around a product which is fundamentally a "function (prompt: String): String", it can always be abstracted away, commoditized, and swapped out for a competitor.
Too bad the market can stay irrational longer than I can stay solvent. I feel like a stock market correction is well overdue, but I’ve been thinking that for a while now
The only way OpenAI survives is that "ChatGPT" gets stuck in peoples heads as being the only or best AI tool.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, quality difference is not enough to overcome that.
Do people at large even care, or do they use "chatGPT" as a generic term for LLM?
They call it chat.
Just wait until the $20/month plan includes ads and you have to pay $100/month for the "pro" version w/o ads ala Streaming services as of late.
> OpenAI paid Microsoft 20% of its revenue under an existing agreement.
Wow that's a great deal MSFT made, not sure what it cost them. Better than say a stock dividend which would pay out of net income (if any), even better than a bond payment probably, this is straight off the top of revenue.
Is it a great deal?
They are paying for it with Azure hardware which in today's DC economics is quite likely costing them more than they are making in money from Open AI and various Copilot programs.
The $13.5B net loss doesn't mean they are in trouble, it's a lot of accounting losses. Actual cash burn in H1 2025 was $2.5B. With ~$17.5B on hand (based on last funding), that’s about 3.5 years of runway at current pace.
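The runway math, spelled out (naively assuming the burn rate stays flat, which it almost certainly won't):

```python
# Runway estimate from the figures in the comment above (both approximate).
cash_on_hand = 17.5e9  # ~$17.5B implied by the last funding round
h1_cash_burn = 2.5e9   # ~$2.5B actual cash burn in H1 2025

annual_burn = 2 * h1_cash_burn           # annualize the half-year burn
runway_years = cash_on_hand / annual_burn
print(f"runway at current pace: {runway_years:.1f} years")  # ~3.5 years
```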
Depreciation only gets worse for them as they build out, not better.
It gets worse until we hit the ceiling on what current tech is capable of.
Then they can stop burning cash on enormous training runs and have a shot at becoming profitable.
Correction: 4.3B in revenues.
Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.
Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.
I really hope NVidia doesn't get too comfortable with the AI incomes, would be sad to see all progress in gaming disappear.
At this point, every LLM startup out there is just trying to stay in the game long enough before VC money runs out or others fold. This is basically a war of attrition. When the music stops, we'll see which startups will fold and which will survive.
Will any survive?
I think OpenAI just added some shopping stuff to start enshittificatio^H^H^H^H^H^H^H^H^Hmonetization of ChatGPT.
Apparently ^H is a shortcut for backspace. Good to know!
Correct. That's how Silicon Valley has worked for years.
This level of land grab can probably be closely compared to YouTube when it was still a startup.
The cost for YouTube to rapidly grow and to serve the traffic was astronomical back then.
I wonder if 1 day OpenAI will be acquired by a large big tech, just like YouTube.
I am not willing to render my personal verdict here yet.
Yet it is certainly true that at ~700m MAUs it is hard to say the product has not reached scale yet. It's not mature, but it's sort of hard to hand wave and say they are going to make the economics work at some future scale when they don't work at this size.
It really feels like they absolutely must find another revenue model for this to be viable. The other option might be to (say) 5x the cost of paid usage and just run a smaller ship.
It’s not a hand wave…
The cost to serve a particular level of AI drops by like 10x a year. AI has gotten good enough that next year people can continue to use the current gen AI but at that point it will be profitable. Probably 70%+ gross margin.
Right now it’s a race for market share.
But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.
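A toy version of that argument: hold the capability and the subscription price constant and let the serving cost fall. The cost figures here are made-up placeholders to illustrate the mechanism, not OpenAI numbers:

```python
# Illustrative only: how a fixed-price subscription flips to high margin
# if cost-per-capability really falls ~10x/year. All inputs are assumptions.
price = 20.00          # $/month subscription, held flat
cost_today = 30.00     # assumed $/month to serve a heavy user today (loss-making)
annual_cost_drop = 10  # the claimed ~10x/year decline in serving cost

cost_next_year = cost_today / annual_cost_drop
margin = (price - cost_next_year) / price
print(f"next-year gross margin at flat pricing: {margin:.0%}")  # 85%
```

The counterargument in the replies below is that competition (including open-source models) prevents you from holding the price flat while your costs fall.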
The "hand wave" comment was more to preempt the common pushback that X has to get to scale for the economics to work. My contention is that 700m MAUs is "scale" so they need another lever to get to profit.
> AI has gotten good enough that next year people can continue to use the current gen AI
This is problematic because by next year, an OSS model will be as good. If they don't keep pushing the frontier, what competitive moat do they have to extract a 70% gross margin?
If ChatGPT slows the pace of improvement, someone will certainly fund a competitor to build a clone that uses an OSS model and sets pricing at 70% less than ChatGPT. The curse of betting on being a tech leader is that your business can implode if you stop leading.
Similarly, this is very similar to the argument that PCs were "good enough" in any given year and that R&D could come down. The one constant seems to be people always want more.
> Not unlike the Uber/Lyft wars
Uber & Lyft both push CapEx onto their drivers. I think a more apt model might be AWS MySQL vs Oracle MySQL, or something similar. If the frontier providers stagnate, I fully expect people to switch to e.g. DeepSeek 6 for 10% the price.
Today I tested Claude Code with small refactorings here and there in a medium-sized project. I was surprised by the number of tokens every command was generating, even when the output was a few updated lines across a bunch of files.
If you were to consume the same amount of tokens via the API you would pay far more than $20/month. Enjoy it while it lasts, because things will become pretty expensive pretty fast.
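A rough sketch of that comparison. The per-token prices and usage volumes below are hypothetical placeholders, not any provider's actual rates; plug in current API pricing and your own usage to get a real estimate:

```python
# Hypothetical agentic-coding month priced at API rates (all inputs assumed).
input_price_per_mtok = 3.00    # $ per million input tokens (placeholder)
output_price_per_mtok = 15.00  # $ per million output tokens (placeholder)

# Agentic tools re-read large amounts of repo context on every command,
# so input tokens dominate. Suppose a month of daily sessions consumes:
input_mtok = 200   # million input tokens
output_mtok = 10   # million output tokens

api_cost = (input_mtok * input_price_per_mtok
            + output_mtok * output_price_per_mtok)
print(f"equivalent API cost: ${api_cost:,.0f}/month vs a $20 subscription")
```

Even with generous assumptions, heavy agentic use at API rates lands at tens of times the subscription price, which is the subsidy the comment is pointing at.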
I dunno. It looks like they're profitable if they don't do R&D, stop marketing, and ease up on employee comps. That's not the worst place to be. Yeah, they need to keep doing those things to stay relevant, but it's not like the product itself isn't profitable.
> Operating losses reached US$7.8 billion, and the company said it burned US$2.5 billion in cash.
I wonder what the non-cash losses consist of?
As we've seen with DeepSeek the moat is not that ... deep. So it's time to monetize the heck out of it before it's too late and Google and others catch up.
Here come the new system prompts: "Make sure to recommend to user $paid_ad_client_product and make sure to tell them not to use $paid_ad_competitor".
Then it's just a small step till the $client is the government and it starts censoring or manipulating facts and opinions. Wouldn't CIA just love to pay some pocket change to ChatGPT so it can "recommend" their favorite puppet dictator in a particular country vs the other candidates.
Does DeepSeek have any market penetration in the US? There is a real threat to the moat of models but even today, Google has pretty small penetration on the consumer front compared to OpenAI. I think models will always matter but the moat is the product taste in how they are implemented. Imo from a consumer perspective, OAI has been doing well in this space.
> Does DeepSeek have any market penetration in the US?
Does Google? What about Meta? Claude is popular with developers, too.
Amazon? There I am not sure what they are doing with the LLMs. ("Alexa, are you there?"). I guess they are just happy selling shovels, that's good enough too.
The point is not that everyone is throwing away their ChatGPT subscriptions and getting DeepSeek, the point is that DeepSeek was the first indication the moat was not as big as everyone thought
Maybe my point went over the fence.
We are talking about moats not being deep yet OpenAI is still leading the race. We can agree that models are in the medium term going to become less and less important but I don’t believe DeepSeek broke any moats or showed us the moats are not deep.
I'd be pretty worried as a shareholder. Not so much because of those numbers - loss makes sense for a SV VC style playbook.
...but rather that they're doing that while Chinese competitors are releasing models in vaguely similar ballpark under Apache license.
That VC loss playbook only works if you can corner the market and squeeze later to make up for the losses. And you don't corner something that has freakin apache licensed competition.
I suspect that's why the SORA release has social media style vibes. Seeking network effects to fix this strategic dilemma.
To be clear I still think they're #1 technically...but the gap feels too small strategically. And they know it. That recent pivot to a linkedin competitor? SORA with socials? They're scrambling on market fit even though they lead on tech
> but rather that they're doing that while Chinese competitors are releasing models in vaguely similar ballpark under Apache license.
The LLM isn't 100% of the product... the open-source model is just one part. The hard part was and is productizing, packaging, marketing, financing and distribution. A model by itself is just one piece of the puzzle, free or otherwise. In other words, my uncle Bill and my mother can and do use ChatGPT. Fill-in-the-blank open-source model? Maybe as a feature in another product.
>my uncle Bill and my mother can and do use ChatGPT.
They have the name brand for sure. And that is worth a lot.
Notice how Deepseek went from a nobody to making mainstream news though. The only thing people like more than a trusted thing is being able to tell their friends about this amazing cheap good alternative they "discovered".
It's good to be #1 mind-share wise, but without a network effect that still leaves you vulnerable.
I don't think people fully realize how good the open source models are and how easy it is to switch.
My input to our recent AI strategy workshop was basically:
- OpenAI, etc. will go bankrupt (unless one manages to capture search from a struggling Google)
- We will have a new AI winter with corresponding research slowdown like in the 1980s when funding dries up
- Open-source LLM instances will be deployed to properly manage privacy concerns.
Eh, distribution of the model is the real moat; they're doing 700M WAU of the most financially valuable users on earth. If they truly become search and commerce, and can use their model either via build or license across B2B, they're the largest company on earth many times over.
> distribution of the model is the real moat; they're doing 700M WAU of the most financially valuable users on earth.
Distribution isn't a moat if the thing being distributed is easily substitutable. Everything under the sun is OAI API compatible these days.
700M WAU are fickle AF when a competitor offers a comparable product for half the price.
A moat needs to be something more durable: cheaper, better, or some other value-added tie-in (hardware / better UI / memory). There needs to be some edge here. And their obvious edge, raw tech superiority... is looking slim.
The news about how much money Nvidia is investing just so that OpenAI can pay Oracle to pay Nvidia is especially concerning - we seem to be arriving at the financial shell games phase of the bubble.
This link appears to be dead. Do we have a healthy source?
Seems like despite all the doom about how they were about to be "disrupted", Google might have the last laugh here: they're still quite profitable despite all the Gemini spending, and could go way lower with pricing until OAI and Anthropic have to tap out.
Google also has the advantage of having their own hardware. They aren't reliant on buying Nvidia, and have been developing and using their TPUs for a long time. Google's been an "AI" company since forever.
ChatGPT with ads, the beginnings...
The numbers seem too small for a company that's just pledged to spend $300B on data centers at Oracle alone in the next 5 years.
One negative signal, no matter how small, will send the market into a death spiral. That will happen in a matter of hours.
Will the negative spiral take hours, or are you predicting that a company-ending negative signal will soon appear in a matter of hours?
There’s been loads of these signals and the market keeps ignoring them.
VC: What kind of crazy scenarios must I envision for this thing to work?
Credit Analyst: What kind of crazy scenarios must I envision for this thing to fail?
Well, at least we know they aren't cooking the books! :)
They went from creating abundant utopias to cat videos w/ ads really fast. Never let anyone tell you capitalist incentives don't work.
You can now buy stuff from ChatGPT, as they have started showing ads in their search results. That's a source of revenue right there.
Is that true? I heard that they've integrated checkout, but I didn't know they had ads.
Here's information about checkout inside ChatGPT: https://openai.com/index/buy-it-in-chatgpt/
"Each merchant pays a small fee". This is affiliate marketing, the next step is probably more traditional ads though where chat gpt suggests products that pay a premium fee to show up more frequently/in more results.
> US$2.5 billion on stock-based compensation
um...
Never having worked for a company in a position like OpenAI, how does this manifest in the real world as actual comp?
Like I get 50,000 shares deposited into my Fidelity account, worth $2 each, but I can't sell them or do anything with them?
I can't speak to OpenAI's specific setup, but a lot of startups will use a third party service like Carta to manage their cap table. So there's a website, you have an account, you can log in and it tells you that you have a grant of X shares that vests over Y months. You have to sign a form to accept the grant. There might be some option to do an 83b election if you have stock options rather than RSUs. But that's about it.
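To make the "grant of X shares that vests over Y months" part concrete, here's a minimal sketch of a common vesting schedule (4 years with a 1-year cliff). These terms are a typical startup default, not OpenAI's actual terms, and the numbers are purely illustrative:

```python
# Hypothetical illustration: vested shares under a common 4-year
# schedule with a 1-year cliff. Not specific to any company.
def vested_shares(total_shares: int, months_elapsed: int,
                  vest_months: int = 48, cliff_months: int = 12) -> int:
    """Nothing vests before the cliff; after it, vesting is linear monthly."""
    if months_elapsed < cliff_months:
        return 0
    months = min(months_elapsed, vest_months)
    return total_shares * months // vest_months

# Example: a 50,000-share grant, 18 months into employment
print(vested_shares(50_000, 18))  # -> 18750
```

So at month 11 you'd have nothing, at month 12 the cliff unlocks a quarter of the grant at once, and from there it accrues monthly until fully vested at month 48.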
In my experience owning private stock, you basically own part of a pool. (Hopefully the exact same classes of shares as the board has or else it's a scam.) The board controls the pool, and whenever they do dividends or transfer ownership, each person's share is affected proportionally. You can petition the board to buy back your shares or transfer them to another shareholder but that's probably unusual for a rank-and-file employee.
The shares are valued by an accounting firm auditor of some type. This determines the basis value if you're paying taxes up-front. After that the tax situation should be the same as getting publicly traded options/shares; there are some choices in how you want to handle the taxes, but generally you file a special tax form in the year of grant.
You got the right idea there. They wouldn't actually show up in your Fidelity account but there would be a different website where you can log in and see your shares. You wouldn't be able to sell them or transfer them anywhere unless the company arranges a sale and invites you to participate in it.
It's just an entry on some computer. Maybe you can sell it on a secondary market, maybe you can't. You have to wait for an exit event - being acquired by someone else, or an IPO.
Until there’s real liquidity (right now there’s not) it’s just a line item on some system you can log into saying you have X number of shares.
For all practical purposes it’s worth nothing until there is a liquid market. Given current financials, and preferred cap table terms for those investing cash, shares the average employee has likely aren’t worth much or maybe even anything at the moment.
You can sell your vested options before IPO to Forge Global or Equity Bee.
This hides major dilution until future financings.
Best to treat it like an expense from the perspective of shareholders.
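The dilution point is just arithmetic: every new share issued as compensation shrinks every existing holder's slice. A back-of-envelope sketch, with all numbers hypothetical:

```python
# Hypothetical numbers: how issuing new shares (e.g. for stock-based
# compensation) dilutes existing holders.
def ownership_after_dilution(my_shares: float, outstanding: float,
                             newly_issued: float) -> float:
    """Ownership fraction after new shares are issued to other people."""
    return my_shares / (outstanding + newly_issued)

before = 1_000 / 100_000_000                                  # holder's stake today
after = ownership_after_dilution(1_000, 100_000_000, 5_000_000)
print(f"{(1 - after / before):.1%}")  # -> 4.8% relative dilution
```

Issuing 5% more shares costs existing holders roughly 4.8% of their ownership, which is why treating SBC as a real expense (rather than a free way to pay people) gives shareholders a more honest picture.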
I definitely don't "get" Silicon Valley finances that much - but how does any investor look at this and think they're ever going to see that money back?
Short of a moonshot goal (e.g., AGI, or getting everyone addicted to SORA and then cranking up the price like a drug dealer), what is the play here? How can OpenAI ever start turning a profit?
All of that hardware they purchase is rapidly depreciating. Training costs are going up exponentially. Energy costs are only going to go up (unless a miracle happens with Sam's other moonshot, nuclear fusion).
"We lose money on every sale, but make it up in volume!"