The trough of disillusionment is going to be frightening, and I believe we're starting to see the first signs of it (with Apple's paper, ironically enough).
Not sure which paper you are referring to. Would you be happy to share it for those of us who seem to have missed it?
This one? https://news.ycombinator.com/item?id=42024539
Thanks, that is probably the one. Here is a link to the paper itself on arXiv:
https://arxiv.org/abs/2410.05229
Paper as in stocks. Traditionally stock certificates were printed on paper.
Apple's share price has taken a bit of a fall this week.
It's a part of the hype cycle.
https://en.m.wikipedia.org/wiki/Gartner_hype_cycle
I'm not sure how accurate it is overall, but there are companies that fit it.
Not ready
I really wonder how the future tariffs are going to shape the AI industry. Are we going to see huge AI clusters being hosted overseas? Will it eat into Nvidia's bottom line or will consumers just eat the price increase?
People are paying for the software, which is made here in the US, and also not subject to tariffs per se. Same could be said about Apple too - you are buying the iPhone for its software, that is what you are paying for, and the software is made here in the US. The customers, the company and the country enjoy the best surpluses in these scenarios. The tariff is like a glorified tax, it's not going to change much.
I am guessing countries subject to tariffs will retaliate by making FANG's life hard.
AI pirates are probably going to be a big thing, but not because of tariffs; they will be running illegal models.
Illegal models! I'm sure there will be more regulation to come but the law has already struggled a lot with the internet and the transactions through it. I wonder how much trial and error this wave will cause.
The tariff hike plan will end up the same as Trump's health plan: coming "in two weeks" for 4 years.
It's not like he shied away from them before. I think he's starting with a high number to force concessions from China, but we'll still end up with some amount of broader tariffs.
Our health system is so screwed up it’s no wonder they can’t make anything happen. There are only two paths to really fix it, and both would cause a lot of short term issues.
Trump is stupid and insane and a coward. But I don't think he's stupid enough to put such egregious tariffs up. Maybe someone can explain how the economy works to him like he's a 5-year-old and maybe he'll get it.
I think he mentioned the tariffs to try to win the election. I don't think he's dumb enough to raise tariffs on all goods, which would lead to inflation, which would lead to low approval ratings.
Apple has an entire diversified product roadmap and ecosystem. Nvidia has a gpu. I don’t see longevity for Nvidia.
Nvidia also has CUDA, which has surpassed all rival attempts at GPU programming frameworks in the last 2 decades. I remember learning OpenCL and rooting for it to become the standard but couldn't find a single job for that role.
CUDA is nice, but it's not a moat. Rocm exists, and even creating a totally new AI compute API is not that far-fetched.
CUDA is totally a moat. ROCM exists but AMD is like 100k engineering years behind NVDA just in terms of how much time NVDA had to invest into CUDA and all the ecosystem around it.
need dataframes and pandas? cuDF. need compression/decompression? nvcomp. need vector search? cuVS. and the list goes on and on and on.
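To make that concrete, here is roughly what the cuDF side looks like - a minimal sketch assuming a CUDA-capable GPU and the RAPIDS cudf package installed (the tickers and prices are made up):

    import cudf  # NVIDIA RAPIDS GPU DataFrame library

    # A small GPU-resident DataFrame; the API deliberately mirrors pandas,
    # so existing pandas knowledge carries over almost one-to-one.
    df = cudf.DataFrame({
        "ticker": ["NVDA", "AAPL", "NVDA", "AAPL"],
        "price": [140.0, 225.0, 142.5, 226.1],
    })

    # Familiar groupby/aggregation, but executed on the GPU.
    print(df.groupby("ticker")["price"].mean())

That pandas compatibility, multiplied across dozens of libraries, is the ecosystem lock-in people mean when they call CUDA a moat.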
> need dataframes and pandas? cuDF. need compression/decompression? nvcomp. need vector search? cuVS. and the list goes on and on and on.
Sure, but that doesn’t mean I’m going to pay a billion extra for my next cluster – a cluster that just does matrix multiplication and exponentiation over and over again really fast.
So I’d say CUDA is clearly a moat for the (relatively tiny) GPGPU space, but not for large scale production AI.
I’ve been using GPU accelerated computing for more than 10 years now. I write CUDA kernels in my sleep. I have seen 10s of frameworks and 10s of chip companies come and go.
I have NEVER even HEARD of Rocm, and neither has anyone in the GPU programming slack group I just asked.
CUDA is absolutely a moat.
Take away iPhone and Apple is worth at best 1/10 of its current market cap. The company isn't as diversified as you think.
AirPods alone generate five times as much revenue as Spotify, and about two-thirds as much as Netflix. Apple is also the world’s biggest watchmaker.
When some of Apple’s side-quests are industry giants in their own right, I think it’s fair to say that Apple are diversified.
Ever tried using an Apple Watch without an iPhone?
Airpods are only popular because of the iPhone.
That's true, but at the same time these are mostly iPhone accessories. Should the iPhone fall out of fashion (which I don't expect, but it's a thought experiment), then the sales of these products will decline as well, so is that really diversification?
Apple has the iPhone. It’s very concentrated in the iPhone. You can say services but services are powered by the iPhone as well.
But there are people who own those iPhones, and many of them care that it's an Apple iPhone. When you use ChatGPT, do you care whether the matrix multiplications were done on genuine Nvidia hardware or custom OpenAI ASICs?
In gaming, people do care if it's Nvidia or AMD. They want Nvidia.
For AI, I assume enterprises do care if it's Nvidia. Right now, Nvidia is in the "no one ever got fired for buying Nvidia" camp. You can buy AMD to save a few dollars, run into issues, and get fired.
a gpu is all you need if you’re one of two companies worldwide that makes them
Especially if you're the only one that makes GPUs for AI use
(deleted because I was wrong)
> Do they even do any R&D anymore?
Yes.
“Apple annual research and development expenses for 2024 were $31.37B, a 4.86% increase from 2023. Apple annual research and development expenses for 2023 were $29.915B, a 13.96% increase from 2022. Apple annual research and development expenses for 2022 were $26.251B, a 19.79% increase from 2021”
— https://www.macrotrends.net/stocks/charts/AAPL/apple/researc...
R&D spending is basically a lie at most tech companies because of how the tax grants for R&D spending work.
> R&D spending is basically a lie
Chips, for one, don’t research themselves: https://www.apple.com/uk/newsroom/2023/03/apple-accelerates-...
Software developers are considered the D in R&D; much of it is just engineering salary cost, since they are a tech company.
Regarding R&D: They have quite a few reports integrating their products with health.
Ex: https://news.ycombinator.com/item?id=41491121
https://news.ycombinator.com/item?id=41948739
Tested by community: https://news.ycombinator.com/item?id=41799324
https://news.ycombinator.com/item?id=42019694
VR was a 10 year or so project but agreed no product-market fit yet.
Could you please not delete your comment in the future, even if you are wrong, and instead just add an edit stating that?
It would make reading the thread easier.
> They seem to have lost faith in their own ability to innovate.
As they should. I mean they can, but they have to change course. All of Silicon Valley has tried to disenfranchise the power users, with excuses that most people don't want those things or that users are too dumb. But the power users are what drives innovation. Sure, they're a small percentage, but they are the ones who come into your company and hit the ground running. They are the ones who will get to know the systems inside and out. They do these things because they specifically want to accomplish things that the devices/software don't already do. In other words: innovation. But everyone (Google and Microsoft included) is building walled gardens, pushing out access. So what do you do? You get the business team to innovate. So what do they come up with? "idk, make it smaller?" "these people are going wild over that gpt thing, let's integrate that!"
But here's the truth: there is no average user. Or rather, the average user is not representative of the distribution of users. If you build for the average, you build for no one. It is hard to invent things, so use the power of scale. It is literally at your fingertips if you want it. Take advantage of the fact that you have a cash cow. That means you can take risks, that you can slow down and make sure you are doing things right. You're not going to die tomorrow if you don't ship; you can take on hard problems and *really* innovate. But you have to take off the chains. Yes, powerful tools are scary, but that doesn't mean you shouldn't use them.
> the average user is not representative of the distribution of users.
What does this mean? Just thinking about iPhones: As of September 2024, there are an estimated 1.382 billion active iPhone users worldwide, which is a 3.6% increase from the previous year. In the United States, there are over 150 million active iPhone users.
Are you math inclined? This is easier to explain with math than with words, but I can put it in more English if you want.
If you're remotely familiar with high-dimensional statistics, one of the most well-known facts is that the density of a high-dimensional normal distribution concentrates on a shell, while a uniform ball is evenly filled. Meaning if you average samples from the normal distribution, the result is not representative of the samples: the average sits inside the ball, but remember, all the sampling comes from the shell! It is like drawing a straight line between two points on a basketball - the middle of that line is going to be air, not rubber. But if you do that for a uniform ball, it is rubber. That's the definition of uniform... Understanding this, we know that user preference is not determined by a single thing, and honestly, this fact becomes meaningful once we're talking about something like 5 dimensions... [0]. This fact isn't just true for normal distributions; it is true for any distribution that is not uniform.
To try to put this in more English: there are 1.382 billion active iPhone users worldwide. They come from nearly 200 countries. The average person in Silicon Valley doesn't want the same thing as the average person in Fresno, California. Do you think the average person in Japan wants the same thing as the average Californian? The average American? The average Peruvian? Taste and preference vary dramatically. You aren't going to make a meal that everyone likes, but if you make a meal with no flavor, at least everyone will eat it. What I'm saying is that if you try to make something for everyone, you make something with no flavor, something without any soul. The best things in life are personal. The things you find most enjoyable are not always going to be what your partner, your best friends, your family, or even your neighbor finds most enjoyable. We may have many similarities, but our differences are the spice of life; they are what make us unique. It is what makes us individuals. We all wear different-sized pants, so why would you think we'd all want to put the same magic square in our pockets (if we even have pockets)? We can go deeper with the clothing or food analogy, but I think you get that a chef knows how to make more than one dish and a clothing designer knows you need to make more than one thing in different sizes and colors.
[0] https://stats.stackexchange.com/a/20084
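If anyone wants to see the "mean is not a typical sample" point numerically, here is a quick sketch with numpy; the dimension and sample count are arbitrary choices, just to make the effect visible:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 1000, 10_000                      # dimension, sample count (arbitrary)
    x = rng.standard_normal((n, d))          # samples from a d-dimensional standard normal

    # Individual samples concentrate on a shell of radius ~sqrt(d), about 31.6 here...
    print(np.linalg.norm(x, axis=1).mean())  # ~31.6

    # ...while their average sits near the origin, nowhere near any actual sample.
    print(np.linalg.norm(x.mean(axis=0)))    # ~0.3

The average is a perfectly fine summary statistic; it just isn't a point anyone actually occupies once you have enough independent dimensions of preference.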
> Nvidia has a gpu.
In addition to the GPUs (which they invented) that Nvidia designs and manufactures for gaming, cryptocurrency mining, and other professional applications, the company also creates chip systems for use in vehicles, robotics, and other tools.
Okay, but $22.6B of their $26B revenue this quarter was from datacenter GPUs.
The only reason they are this big right now is because they are selling H100s, mostly to other big tech companies.
Really? Nvidia’s marketing and PR teams are trying to trick people into thinking that they invented GPUs? Does Nvidia have no shame and or scruples?
The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994.
https://en.wikipedia.org/wiki/Graphics_processing_unit
It is technically correct if you take "GPU" to mean "capable of doing 2D and 3D graphics on a single chip." But I think that's hair-splitting to the point of advertisement: the older 3dfx Voodoo was a GPU that handled the 3D math while the video card itself handled the 2D math. And of course that 2D video card itself had a unit which processed graphics! "Both in one chip" is an important milestone but it's pretty arbitrary as a definition of GPU.
It really depends on what you want to call a "GPU"
The device in the PS1 has also been referred to as a "Geometry Transfer Engine"
You can see its features and specs here: https://en.m.wikipedia.org/wiki/PlayStation_technical_specif...
Some may say that it is not a "real GPU" or certain features (like 3d) are missing to make it one.
The Nvidia claim is for the GeForce 256, released in 1999.
This makes me wonder if our grandkids will be debating on what the first "real AI chip" was - would it be what we call a GPU like the H100 or will a TPU get that title?
That is just pedantic and a bit disingenuous. 2D/3D accelerators existed before 1999.
I actually had one of these cards; https://en.wikipedia.org/wiki/S3_ViRGE
It sucked but it was technically one of the first "GPU"s.
Also let us not forget 3dfx and the Voodoo series cards.
Don't let Nvidia rewrite history please.
I agree, but a deeper question in my post was: when did the old technology evolve into what we call a GPU today?
I don't want to rewrite history either.
It's partially telling that you write "2D/3D accelerators", which means they were a different class of thing - if they were full GPUs, you would have called them that.
My point being - what defines what a GPU is? Apparently there were things called GTEs, accelerators, and so on. Some feature or invention crossed the line for us to label them as GPUs.
Just like over the last ~10 years we have seen GPUs losing graphical features and picking up NN/AI/LLM stuff to the point we now call these TPUs.
Will the future have confusion over the first ~AI CHIP~? Some conversation like
"Oh technically that was a GPU but it has also incorporated tensor processing so by todays standards it's an AI CHIP."
> It's partially telling that you write "2D/3D accelerators" which means that was a different class of thing
It's because that's what they were called at the time. Just because someone calls a rose a different name doesn't mean it doesn't smell the same.
What defines a GPU? It's a compute unit that processes graphics; yes, it is that simple. There were many cards that did this before Nvidia.
An A.I. chip is just a tensor processing unit, a TPU; this is not that hard to grasp, in my opinion.
> What defines a GPU? It's a compute unit that processes graphics; yes, it is that simple
But you originally declared the Sony chip as the first GPU. There were many things that processed graphics before that, as you have declared. Apparently going back to the Amiga in the 70s.
It is this muddiness of retroactively declaring tech to be a certain kind that I'm questioning here.
Apple once had just an iMac.
When was that? I remember Apple always having multiple products except at the very start.
Discussion (79 points, 12 days ago, 48 comments) https://news.ycombinator.com/item?id=41952389
[dupe] https://news.ycombinator.com/item?id=42055199
Is it scary that Huawei is competing with both, and very quickly?
in what sense?
there isn't any competitor to the MacBook Pro and the M4.
there isn't any competitor to NVL based Blackwell racks. not even for the H100/H200.
so how do you think Huawei competes?
"largest" in terms of market valuation; so, not a very substantial measure.
For public companies, market cap is the most important metric. So it’s the most substantial measure to me.
Why is market cap the most important metric? It does not exist in a void. Market cap + P/E or forward P/E, P/B, P/S, etc. are all metrics of high import.
Market cap is one piece of a complicated picture. On its own, you don't know too much.
I thought it was obvious that I implied market cap increase by percentage.
When you're evaluating your ROI on tech stocks, the #1 factor is market cap delta between when you bought the shares and now.
And how much money do you make from trading large cap US equities?
Market cap is kind of made up. The stock is worth what people believe it to be worth.
Profit is the number that really matters.
Profit and expected future profit (based on growth rate) is one of the ways a (non publicly traded) company is valued.
Theoretically that's how price targets are created by analysts. So market cap is explainable through that metric.
Profit this year and no growth vs. the same profit this year and double profits next year should lead to different market caps, and it generally does.
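A toy illustration of that last sentence, with a made-up exit multiple and horizon (these numbers are assumptions for illustration only, not how any analyst actually models a real company):

    # Same profit today, different growth, very different implied value.
    def implied_value(profit_now, annual_growth, years=5, exit_multiple=20.0):
        """Value the company as exit_multiple times profit after `years` of compound growth."""
        return exit_multiple * profit_now * (1 + annual_growth) ** years

    print(implied_value(1.0, 0.0))  # 20.0  -- no growth
    print(implied_value(1.0, 1.0))  # 640.0 -- profits doubling every year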
> Market cap is kind of made up.
For a public company, it's all about return on investment, right? Market cap is the most important factor in return on investment right? Therefore, market cap is the most important measure.
> Market cap is the most important factor in return on investment right?
Completely wrong. Hint: You are on HN. Startups obviously don't start at a large market cap.
I'm not wrong. You just interpreted me wrong.
I didn't say you should invest in companies that already have a big market cap. I said market cap increasing is the ROI to investors (besides dividends, which are mostly irrelevant when investing in tech companies).
That's exactly how startups work. You invest at a cap of $50m. Its market cap increases to $1 billion. You just made 20x (provided you have liquidity).
Oh, I should only invest in companies with the biggest market caps?
Sure, if you think they can increase their market caps by the highest percentage and have enough liquidity for you to exit if you wish.
If there are 100 shares in a company but only 2 people own them - person1 has 99 shares, person2 has 1 share - and person1 sells one of their shares to person3 for $100:
Does that mean the market cap is $10k? Yes. Is that meaningful if there are no other buyers at $100? No.
Luckily, Nvidia has no such liquidity issues given that it's the most traded stock in the world.
https://finance.yahoo.com/markets/stocks/most-active/
So yes, Nvidia's market cap is meaningful. If the reported market cap is at $3.5t and you want to sell your shares, you can easily find someone else who values Nvidia at $3.5t to buy them.
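To spell out the arithmetic in the 100-share toy example above (just restating the parent's numbers), market cap is nothing more than the last trade price multiplied by shares outstanding:

    # The parent's toy example: market cap = last trade price x shares outstanding.
    shares_outstanding = 100
    last_trade_price = 100.0   # one share changed hands at $100

    market_cap = last_trade_price * shares_outstanding
    print(market_cap)          # 10000.0 -- "worth" $10k only if buyers actually exist at that price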
Not sure how this applies here. It's a publicly traded company, so the price is literally what buyers are willing to pay.
Ownership is also diversified. Huang is at less than 4%, and big institutions are also at less than 10%. So a good chunk of the stock should have some level of liquidity at a reasonable premium.
Just because it’s publicly traded doesn’t change anything.
There might be no buyers. There might be no buyers willing to buy at the last sold price. There might be no sellers willing to sell at the last sold price.
Market cap would be $10,000, but if there isn't a single person willing to buy at that price, then is it really worth that much?
Of course it changes it. The market cap you see is based on up to date trade clearing prices.
It’s a publicly traded company with very high liquidity (tens of billions being traded each day). The market cap is based on the price that other buyers and sellers are bidding. This hypothetical of there being no other buyers simply doesn’t apply.
Like Tesla but bonkers
Need I remind you who runs Tesla? Tesla is already bonkers.
Isn't that a different kind of bonkers though?
I get that they’re selling huge amounts of hardware atm, but I feel like this is entirely due to the hype train that is BS generators.
I have not found any of the aggressively promoted use cases to be better than what they replaced, and the things people actually choose to use seem to be of questionable long-term value.
I can’t help but feel that this nonsense bubble is going to burst and a lot of this value is going to disappear.
In document recognition they're going to replace everything that came before. A couple of years ago you needed a couple of ML experts to set up, train and refine models that could parse through things like contracts, budgets, invoices and whatnot to extract key info that needed to be easily available for the business.
Now you need someone semi-proficient in Python who knows enough about deployment to get a local model running. Or alternatively skills to connect to some form of secure cloud LLM like what Microsoft peddles.
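To give a sense of how low that bar is now, here is a minimal sketch of the kind of thing I mean, assuming a locally running OpenAI-compatible chat endpoint (e.g. something like Ollama); the URL, model name and field list below are placeholders, not our actual setup:

    import json
    import requests  # assumes a local OpenAI-compatible chat endpoint is already running

    ENDPOINT = "http://localhost:11434/v1/chat/completions"  # placeholder URL
    MODEL = "llama3.1:8b"                                     # placeholder model name

    def extract_invoice_fields(document_text):
        """Ask the local model to pull key fields out of an invoice as JSON."""
        prompt = (
            "Extract supplier, invoice_number, total_amount and due_date from the "
            "document below. Reply with JSON only.\n\n" + document_text
        )
        resp = requests.post(ENDPOINT, json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,
        }, timeout=120)
        resp.raise_for_status()
        content = resp.json()["choices"][0]["message"]["content"]
        return json.loads(content)  # in practice you validate and retry on malformed JSON

Adding a new document type is mostly a matter of changing the prompt and the validation, which is why the turnaround dropped so dramatically for us.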
For us it meant that we could cut the work from 6-12 months to a couple of weeks for the initial deployment, and from months to days for adding new document types. It also meant we need one inexpensive employee for maybe 10% of their total time, where we needed a couple of expensive full-time experts before. We actually didn't have a problem with paying the experts; the real challenge was finding them. It was almost impossible to attract and keep ML talent because they had little interest in staying with you after the initial setup, since refining, retuning and adding new document types is "boring".
As far as selling hardware goes, I agree with you. Even if they have the opportunity to sell a lot right now, it must be a very risk-filled future. Local models can do quite a lot on very little compute, and it's not like a lot of use cases, like our document one, need to process fast. As long as it can get all our incoming documents done by the next day, maybe even by next week, it'll be fine.
Weirdly enough for my next "AI hardware" purchase, I'm waiting for the release of the M4 Ultra to max out on VRAM since Nvidia's chips are highly overpriced and using GPU RAM to price gouge the market.
Since the M4 Max allows 128GB of RAM, I'm expecting the M4 Ultra to max out at 256GB. Curious what the best-value x86 consumer hardware with 256GB of GPU RAM is. I've noticed tinygrad offers a 192GB GPU RAM setup for $40K USD [1]; anything cheaper?
[1] https://tinygrad.org/#tinygrad