This post is written with its intellectual fly open. I'm not sure whether it was partly AI-generated, or whether the author has spent so much time ingesting AI-generated content that the tells have rubbed off, but this article has:
- Strange paragraph-lists with bolded first words. e.g. "The Cash Flow Mystery"
- The 'It's not just X; it's Y' meme: "Buying Groq wouldn't just [...], it could give them a chip that is actually [...]. It’s a supply chain hedge."
Tells like:
- "My personal read? NVIDIA is [...]"
- "[...]. Now I'm looking at Groq, [...]"
However, even if these parts were AI generated, it's simultaneously riddled with typos and weird phrases:
- "it looks like they are squeezing each other [sic] balls."
- Stylization of OpenAI as 'Openai'.
Not sure what to make of this low-quality prose.
Even if the conclusion is broadly correct, that doesn't mean the reasoning used to get there is consistent.
I do, at least, appreciate that the author was honest up-front with respect to use of Gemini and other AI tools.
Final grade: D+.
It does amuse me when you have great, clean writing in some parts of a post, but then you have a sentence like
> As we head into 2026, when looking at Nvidia, openai and Oracle dynamics, it looks like they are squeezing each other balls.
Yeah I don't think there's a snowball's chance in heck that an LLM wrote that one, lol. My best guess is that the author combed over some of their prose with an LLM, but not all.
> Even if the conclusion is broadly correct, that doesn't mean the reasoning used to get there is consistent.
This is the conclusion of a reply that focused entirely on critiquing OP's style/AI use instead of their reasoning? Ironic.
Poe's law strikes again! I'm glad someone got it.
Why use a throwaway account if posting an honest critique?
1. I haven't commented on HN in a while and didn't want to dig up my password. Throwaway accounts are a tradition.
2. I don't want people to see my disparagement of the quality of prose in this article as indication of personal agreement or disagreement with any of the points in the article. I have no horse in this race. I just want to read high-quality material. I love HN, but I'm not sure how much longer HN will be a place I can frequent in this respect. Have the hills not eroded? What of childlike curiosity?
3. My comment is nothing special. There are others pointing out this article is AI generated. People can verify the contents of my comment independently and come to their own conclusions. It does not require that I lean on implied authority of some form.
I read a lot, it's basically all I do. I wish writers maintained the contract of spending at least as much energy writing out ideas as they expect their audience to expend while reading them.
I will now log out of this account and lose the password. I hope this was helpful. I intend no malice; I'm sure the author of this piece is a kind person and fun to hang out with. I hope they take this feedback the right way.
avoiding vindictiveness perhaps
Is this the new form of ad-hominem? ad-AI?
HN is full of know-little blogspam, although rarely does it get to the top like this one did.
> However, Groq’s architecture relies on SRAM (Static RAM). Since SRAM is typically built in logic fabs (like TSMC) alongside the processors themselves, it theoretically shouldn't face the same supply chain crunch as HBM.
It's true that SRAM comes with your logic: you get a TSMC N3 (or N6, or whatever) wafer, you've got SRAM. Unfortunately, SRAM just doesn't have the capacity, so you have to augment with DRAM, which is what you see companies like D-Matrix and Cerebras doing. Perhaps you can use cheaper, more available LPDDR or GDDR (Nvidia has done this themselves with Rubin CPX), but that also has supply issues.
Note that it's not really parameter storage (which you can amortize over multiple users) that gets you; it's KV cache storage, which scales with the user count (a rough sketch of the arithmetic is below).
Now, Groq does appear to be going for a pure-SRAM play, but if the easily available pure-SRAM option comes at some multiple of the capital cost of the DRAM option, it's not a simple escape hatch from DRAM availability.
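To make the KV cache point concrete, here is a rough back-of-envelope sketch. Every model dimension in it is an assumed, illustrative value, not any particular chip's or model's spec:

```python
# Rough illustration of why weights amortize across users while KV cache
# does not. All model dimensions here are invented for illustration.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Factor of 2 covers the separate K and V tensors per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

weight_bytes = 70e9          # a hypothetical 70B-parameter model in 8-bit weights
per_user_kv = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                             seq_len=32_768)   # ~10.7 GB at a 32k context

for users in (1, 10, 100):
    print(f"{users:>3} users: weights ~{weight_bytes / 1e9:.0f} GB (shared), "
          f"KV cache ~{per_user_kv * users / 1e9:.0f} GB (grows with users)")
```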
SRAM scaling also hit a wall a while ago, so you can't really count on new processes allowing for significantly higher density in the future. That's more of a longer-term issue with the SRAM gambit that'll come into play after the DRAM shortage is over though - logic and DRAM will keep improving while SRAM probably stays more or less where it is now.
You can still scale SRAM by stacking it in 3D layers, similar to the common approach now used with NAND flash. I think HBM DRAM is also directly stacked on-die to begin with; apparently that's the best approach to scaling memory bandwidth too.
It'll be interesting to see if we get any kind of non-NAND persistent memory in the near future that might beat some performance metrics of both DRAM and NAND flash.
NAND is built with dozens of layers on one die. HBM DRAM is a dozen-ish dies stacked and interconnected with TSVs, but only one layer of memory cells per die. AMD's X3D CPUs have a single SRAM die stacked on top of the regular CPU+SRAM, with TSVs in the L3 cache to connect to the extra SRAM. I'm not aware of anyone shipping a product that stacks multiple SRAM dies; the tech definitely exists but it may not be economically feasible for any mass-produced product.
> AMD's X3D CPUs have a single SRAM die stacked on top of the regular CPU+SRAM, with TSVs in the L3 cache to connect to the extra SRAM.
Just FYI, the latest X3D flipped the stack; the cache die is now on the bottom. This helps transfer heat from the compute die to the heatsink more effectively. In armchair silicon designer mode, one could imagine this setup also adds potential for stacking multiple cache dies: since they already interpose all the signals, why not add a second one? But I'm sure it's not that simple; for one, AMD wants the package z-heights to be consistent between the X3D and the normal chip.
The issue is size: SRAM is 6 transistors per bit, while DRAM is 1 transistor and a capacitor. Anyone who wants density starts with DRAM. There’s never been motivation to stack.
The specifics of the article make zero sense. Net Income and Operating Cash Flow are not the same thing, so there is no mystery about why they are different, especially in a business with large capex and long lead times.
NVIDIA's historic DSO figures have also ranged from 41 to 57 days over the last 5 years, so again, not that crazy.
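For reference, DSO (days sales outstanding) is just accounts receivable scaled against revenue over the period; a minimal sketch with made-up figures:

```python
# Days Sales Outstanding: how many days of revenue are sitting in
# accounts receivable. Figures below are invented for illustration.

def dso(accounts_receivable, revenue, days_in_period=91):
    return accounts_receivable * days_in_period / revenue

quarterly_revenue = 35.0   # $B, hypothetical
receivables = 20.0         # $B, hypothetical

print(f"DSO = {dso(receivables, quarterly_revenue):.0f} days")
# 20 * 91 / 35 = 52 days, comfortably inside the 41-57 day historical range.
```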
> Since SRAM is typically built in logic fabs (like TSMC) alongside the processors themselves, it theoretically shouldn't face the same supply chain crunch as HBM.
DRAM and logic fabs are both sold out so replacing one with the other doesn't really help. And SRAM uses ~6x more silicon area than DRAM.
Groq may be undervalued but not for supply chain reasons.
The critiques of 'circular funding' don't really make sense to me. If you invest 20 billion and you get back 20 billion, your profit is the same. Sure your revenues look higher but investors have access to all that information and should be taking that into account, just like all the other financial data.
Michael Burry is betting against AI growth translating into real profits as a whole, not the circular funding.
Burry's critique is that the Nvidia funding deals have them investing money in a company and getting both stock in that company and their own money back when that company buys the chips. They then book the chip sales as revenue, but they don't show the investment as a cost, since investments are treated separately from an accounting perspective. So it looks like they're growing revenue organically at no cost, which doesn't seem logically consistent with what's actually happening.
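A toy sketch of the asymmetry being described, with invented numbers; this is a caricature of the bookkeeping, not a claim about Nvidia's actual accounts:

```python
# Caricature of the round-trip: the equity investment sits on the balance
# sheet, while the chip sale it finances flows through the income statement
# as ordinary revenue. All numbers are invented.

investment_in_customer = 10.0   # $B cash out, recorded as an asset (equity stake)
chips_sold_back = 10.0          # $B revenue when the customer spends it on chips
cost_of_goods = 3.0             # $B, assumed cost of the hardware shipped

# Income-statement view: looks like organic, high-margin growth.
gross_profit = chips_sold_back - cost_of_goods
print(f"Reported revenue ${chips_sold_back:.0f}B, gross profit ${gross_profit:.0f}B")

# Cash view: the same $10B made a round trip, minus the cost of building the chips.
net_cash = -investment_in_customer + chips_sold_back - cost_of_goods
print(f"Net cash change ${net_cash:+.0f}B, plus an equity stake of uncertain value")
```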
The problem is that stocks are often valued and traded on revenue growth, not profit.[0] So circular funding generates stock price bumps when, as you said, there's no inherent value underneath. It creates a recipe for a crash.
[0] Consider PagerDuty: incredibly profitable with little revenue growth, trading at 1.5x revenue, while high-revenue-growth, unprofitable companies are trading at 10x revenue.
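A tiny illustration of how zero-margin, round-tripped revenue can still move a revenue-multiple valuation; the multiple and figures are invented:

```python
# Toy example: under revenue-multiple valuation, revenue that adds zero
# profit still adds "value". Multiple and figures are invented.

base_revenue = 100.0     # $B
round_tripped = 20.0     # $B of revenue financed by the vendor's own investment
growth_multiple = 10.0   # market price per $ of "high-growth" revenue

value_without = base_revenue * growth_multiple
value_with = (base_revenue + round_tripped) * growth_multiple
print(f"Implied value: ${value_without:.0f}B -> ${value_with:.0f}B, "
      f"despite no added profit")
```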
Both are taken into account. Potential profitability is taken into account with growth companies; circular funding has no effect on that. With unprofitable companies, the case is made on how risky the company is and what the potential profit will be in the future.
It's crypto wash trading.
I've run into this before in other industries as well. Sports franchises are notorious for expecting any company doing work for the franchise to then spend some of the money earned back with the franchise, in the form of buying advertising, suites, etc., to the point that very little money, if any, is made by the vendor.
It's worse than that. One side of the "circle" is 40 billion, the other side is 300. Why not just subtract it and say 260 billion is going one way?
The real story is that Nvidia is accepting equity in their customers as a payment for their hardware. "What, you don't have cash to buy our chips? That's OK, you can pay by giving us 10% of everything you earn in perpetuity."
This has happened before; let's call it the "selling the goose that lays golden eggs" scam. You can buy our machine that converts electricity into cash, but we will only take preorders; after all, it is such a good deal. Then, after building the machines with said preorder money, they of course plugged the machines in themselves instead of shipping them, claiming various "delays" in production. Here I'm talking about bitcoin mining hardware when that hardware first appeared.
Nvidia is doing a similar thing; just instead of doing it 100% themselves, they are 10% in by acquiring equity in their customers.
> Here I'm talking about the bitcoin mining hardware when the said hardware first appeared.
Even better: we take preorders, and while we delay for a year, we run the ASICs ourselves with way outsized TH/s compared to the rest of the world. Once we develop the next one, we release the 'new' one to the public with 1/10th of the hash power.
If you invest $100B and get back $40B in sales, you're investing $60B of money and $40B of your products. This is simple stuff. The question is whether or not it is a good investment. Probably not.
It's certainly a problem when circular investment structures are used to get around legal limits on the amount of leverage or fractional reserve, or to dodge taxes from bringing offshore funds onshore.
Plenty of sneaky ways of using different accounting years offshore to push taxes forward indefinitely too, since the profit is never present at the year end.
Nvidia could have invested elsewhere, but they’re doubling down on AI.
Their shares have been tanking for a month, even after a very good earnings report, so perhaps the market seeks a little more diversity?
This isn't an investing site, but Coreweave is what I watch. All those freaking datacenters have to get built, come online, and work for all the promises to come true. Coreweave is already in a bit of a pickle; I feel like they are the first domino.
/not an investing/finance/anything to do with money expert.
if you invest 20 and get 20 then you got 0% profit
0% NET accretive profit - the OP was saying that the invest/return wash doesn't affect prior profitability, just revenue. Obviously, the new profitability inclusive of the new revenue will actually be lower because of the zero-margin wash trade.
Why would you say that? If I take cash and buy an asset, I haven't lost money.
Depends on when 20 goes in and 20 comes out.
And how inflation and interest are accounted.
Just because it's legal and in the open doesn't mean it's sound or not creating perverse incentives. Investors that "should be taking that into account" probably are, and hoping that they come out on top when the bubble bursts. That means pain for many people. Those are very valid reasons to point the finger and criticize.
>> If you invest 20 billion and you get back 20 billion
It's about keeping Wall Street bubble momentum, not financials.
Isn't that the entire stock market for the last 20 years?
Not a big fan of the circular observation. It's not the gotcha people seem to think.
If the baker sells bread to the butcher, and the butcher sells meat to the baker then they can still both go to bed with a belly full of sandwich (aka actual utility & substance).
Adding a third party to make it look more circle-y doesn't change that logic.
Round trip financing is mostly an issue if it is artificial (e.g. a circle of loans) and between affiliated parties, not when something of substance is delivered. Oracle is a business partner of nvidia but I'd wager they'll still kick up a fuss if they don't get their pallets of GB200s. They'll expect actual delivery...like you know...in a real sale.
Regardless of the content itself, using Nano Banana to illustrate OpenAI's financial bullshittery is some grade A snark.
Isn't circular funding how the entire economy works?
I can see how you could make an argument that this particular ouroboros has an insufficient loop area to sustain itself, or more significantly, lacks connection to the rest of the economy, but money has to flow in circles/cycles or it doesn't work at all.
Parties in an economy don't normally buy something that they sell at the same time. It's hazier than that here, but it still looks like Nvidia is buying GPUs from itself via OpenAI and Oracle.
Btw there are examples involving sanctioned economies. Most US saffron comes from Spain, all of whose saffron comes from Iran. Azerbaijan exports way more gas than they produce, cause they also buy from Russia.
Only when using non-aligned interests, like government funding roads.
When interests directly align and parties are largely owned by the same people, it's wash trading.
The point of wash trading is to make activity increase the value of an asset via a net-zero activity. Since nothing is generated from the activity, it's circular; i.e., nothing physical changes hands.
Crypto trading is the golden child of wash trading as the primary mode of increasing the value of an asset.
It's unsurprising, then, that the company that got rich on crypto wash trading is making its own attempts to drive artificial demand.
AI-generated section “NVIDIA’s earnings” nerfed the credibility of this piece.
The circular funding is concerning, but more concerning are suggestions that supply might be vastly exceeding demand. Not that people don't want chips, but that chip production now exceeds the ability to power them up and use them. The shortage is power and racks in data centers ready to go. Folks are running numbers suggesting there's a bunch of chips now just sitting around.
Combine that with some cooling as the AI hype bubble bursts (see separate articles about companies missing quota as folks aren't buying as much AI as the hype hoped), and there's a potentially ugly future where headline demand plummets on top of idle chips waiting to be powered on. Suddenly the market is flooded with chips nobody wants.
> However, Groq’s architecture relies on SRAM (Static RAM). Since SRAM is typically built in logic fabs (like TSMC) alongside the processors themselves, it theoretically shouldn't face the same supply chain crunch as HBM.
>
> Looking at all those pieces, I feel Oracle should seriously look into buying Groq.
I don't see why. Graphcore bet on SRAM and that backfired because unless you go for insane wafer scale integration like Cerebras, you don't remotely get enough memory for modern LLMs. Graphcore's chip only got to 900MB (which is both a crazy amount and not remotely enough). They've pivoted to DRAM.
You could make an argument for buying Cerebras I guess, but even at 3x the price, DRAM is just so much more cost effective than SRAM I don't see how it can make any sense for LLMs.
Forget about DRAM vs. SRAM or whatever: How does a cheaper source of non-Nvidia GPUs help Oracle? They’re not training models or even directly in the inference business. Their pitch is cloud infra for AI, and today that means CUDA & Nvidia or you’re severely limiting your addressable market.
I appreciate the disclosures about Gemini and Nano Banana, but does that start to feel a little like a conflict of interest or something similar in an article discussing their competition?
It's so wild to me that people who should know better pretend that this kind of stuff doesn't happen in every industry.
At this scale? Why don't you give some examples.
Pull our POV back far enough, and isn't "circular funding" just "The economy?"
Money circulates; it's what it does. The real question is to what extent circulation among a small group of firms is either collusion in disguise (i.e. decisionmaking by only one actual entity falsely measured as multiple independent entities) or a fragile ecosystem masquerading as a healthy one (i.e. an "island economy" where things look great in the current status quo, but the moment the fish go away the entire cycle instantly collapses).
> quietly arming themselves for a breakout.
If I wanted to read Gemini's opinion on this issue in the voice of a crank technical analyst, I would.
This is what happens when highly confident uneducated people read slop from other uneducated people with an agenda (Twitter, Burry et al) and then regurgitate more slop.
There is no circular funding. There’s certainly circular speculation that is driving up the prices, but the revenues are all accounted for.
The DSO change is meaningless if you understand accounting.
The inventory building up is the cost of materials and incomplete inventory. It’s not chips sitting around waiting to be deployed.
> holding ~120 days of inventory seems like a huge capital drag to me.
Yeah I guess this guy who knows nothing about running a business like Nvidia is allowed to make confident statements like this despite no education or experience.
This article is garbage and he wasted his 48 hrs investigating the same things I read in another worthless tweet several weeks ago.
I agree it's poorly written, but I'm _much_ more interested in whether it is correct, or incorrect. Do you believe it is incorrect?
Apparently it doesn't matter to them
Isn't news of Burry's or Pelosi's, or anyone else's, investments usually 3 months old?
My god, this article and half the comments here seem like they came from AI.
Dead internet much?
You're absolutely right!
Wow, calflegal. That's not just good insight, it's smart thinking.
I think you're correct — there's a lot of LLM generated content here!
The Burry short is just one data point, but the "facts we know" are piling up fast.
Here is a possible roadmap for the coming correction:
1. The Timeline:
We are looking at a winter. A very dark and cold winter. Whether it hits before Christmas or mid-Q1 is a rounding error; the gap between valuations and fundamentals has widened enough to be physically uncomfortable.
The Burry thesis—focused on depreciation schedules and circular revenue—is likely just the mechanical trigger for a sentiment cascade.
2. The Big Players:
Google: Likely takes the smallest hit. A merger between DeepMind and Anthropic is not far-fetched (unless Satya goes all the way).
By consolidating the most capable models under one roof, Google insulates itself from the hardware crash better than anyone else.
OpenAI: They look "half naked." It is becoming impossible to ignore the leadership vacuum. It’s hard to find people who’ve worked closely with Altman who speak well of his integrity, and the exits of Sutskever, Schulman, and others tell the real story.
For a company at that valuation, leadership credibility isn’t a soft factor—it’s a structural risk.
3. The "Pre-Product" Unicorns: We are going to see a reality check for the ex-OpenAI, pre-product, multi-billion valuation labs like SSI and Thinking Machines.
These are prime candidates for "acquihres" once capital tightens. They are built on assumptions of infinite capital availability that are about to evaporate.
4. The Downstream Impact:
The second and third tier—specifically recent YC batches built on API wrappers and hype—will suffer the most from this catastrophic twister.
When the tide goes out, the "Yes" men who got carried away by the wave will be shouting the loudest, pretending they saw it coming all along.
I don't believe your comment is just a direct dump out of an LLM's output, mainly because of the minor typo of "acquihires", but as much as I'd love to ignore superficial things and focus on the substance of a post, the LLM smells in this comment are genuinely too hard to ignore. And I don't just mean because there's em-dashes, I do that too. Specifically these patterns stink very strong of LLM fluff:
> leadership credibility isn’t a soft factor—it’s a structural risk.
> The Timeline/The Big Players/The "Pre-Product" Unicorns/The Downstream Impact
If you really just write like this entirely naturally then I feel bad, but unfortunately I think this writing style is just tainted.
Very helpful, an AI comment analyzing an analysis of AI