Mark Zuckerberg freezes AI hiring amid bubble fears

(telegraph.co.uk)

656 points | by pera 13 hours ago

658 comments

  • khoury 13 hours ago
  • tracker1 3 hours ago

    I'm somewhere in the middle on this, with regard to the ROI... this isn't the kind of thing where you see an immediate reflection in quarterly returns... it's the kind of thing where, if you don't hedge some bets, you're likely to die out completely in a generational shift.

    Facebook's product is eyeballs... they're being usurped on all sides between TikTok, X and BlueSky in terms of daily/regular users... They're competing with Google, X, MS, OpenAI and others in terms of AI interactions. While there's a lot of value in being the option for communication between friends and family, and the groups on FB don't have a great alternative, the entire market can shift greatly depending on AI research.

    I look at some of the demos (I think it was OpenAI) of generated terrain/interaction and can't help but think that's a natural fit with FB/Meta's investments in their VR headsets. They could potentially lose completely on a platform they largely pioneered. They could wind up like BlackBerry if they aren't ready to adapt.

    By contrast, Apple's lack of appropriate AI spending should be very concerning to any investors... Google's assistant is already quite a bit better than Siri and the gap is only getting wider. Apple is woefully under-invested, and the accountants running the ship don't even seem to realize it.

    • daheza 3 hours ago

      I think Apple is fine. When AI works without 1-in-5 hallucinations, then it can be added to their products. Showing up late with features that exist elsewhere, but polished in Apple's presentation, is the way.

      • rafaelmn an hour ago

        Have you used Siri recently? It's actually amazing how consistently it can be crap at tasks, considering the underlying tech. 1-in-5 hallucinations would be a welcome improvement.

        Using ChatGPT voice mode and Siri makes Siri feel like a legacy product.

        • rTX5CMRXIfFG 8 minutes ago

          I don’t think that’s the point. Yes, Siri is crap, but Apple is already working on integrating LLMs at the OS level and those are shipping soon. It’s a quick fix to catch up in the AI game, but considering their track record, they’re likely to eventually retire third party partnerships and vertically integrate with their own models in the future. The incentive is there—doing so will only boost their stock price.

      • makeitdouble an hour ago

        In general I don't think Google or Apple need AI.

        In practice, though, their platform is closed to any assistant other than theirs, so they have to come up with a competent service (basically Ben Thompson's "strategy tax" playing out in full)

        That question will be moot the day Apple allows other companies to ingest everything that's happening on the device and operate the whole device in response to the user's requests, and some company actually does a decent job of it.

        Today Google is doing a decent job and Apple isn't.

    • malfist 3 hours ago

      How many years of not seeing returns this quarter does it take before it's all hype?

      • tracker1 2 hours ago

        How long did it take Space-X to catch a rocket with giant chopsticks?

        It's more than okay for a company with other sources of revenue to do research towards future advancement... it's not risking the downfall of the company.

      • throw310822 an hour ago

        "technology too expensive to be offered at a profit (yet)" != hype

    • mvdtnz 23 minutes ago

      > they're being usurped on all sides between TikTok, X and BlueSky

      Good grief. Please leave your bubble once or twice a month.

      Tiktok yes. X and Bluesky, absolutely not.

      • tracker1 a few seconds ago

        Monthly active users:

        From DemandSage:

            Facebook - 12 billion!?
            TikTok - 1.59 billion
            X - 611 million
            Bsky - 38 million
        
        That's according to DemandSage... I'm not sure I can trust those numbers; FB supposedly jumped up from around 3b last year, which I also don't trust. 12b is more than the global population, so it has to be mostly bots. Even the 3b figure is hard to believe (close to half the global population), since I have no idea how much of the population of earth even has internet access.

        From Grok:

            Facebook - 3.1 billion
            TikTok - 1.5-2 billion
            X - 650 million
            Bsky - 4.1 million
        
        Looks like I'm definitely in a bubble... I tend to interact 1:1 about as much on X as on Facebook, which for me is mostly friends/family and limited discussions in groups. A lot of what I see in feeds is copypasta from TikTok though.

        That said, I have a couple friends who are die hard on Telegram.

  • seu 7 hours ago

    These changes in direction (spending billions, freezing hiring) over just a few months show that these people are as clueless about what's going to happen with AI as everyone else. They just have the billions, and therefore dictate where the money goes, but that's it.

    • awalsh128 5 hours ago

      This is why I ignore anything CEOs say about AI in the news. Examples: AGI in a few years, most jobs will be obsolete, etc.

    • thefourthchime 2 hours ago

      That's one interpretation, but nobody really knows. It's also possible that they got a bunch of big egos in a room and decided they didn't need any more until they figured out how to organize things.

    • toephu2 2 hours ago

      The media always says AI is the biggest technological change of our lifetime... I think it was actually the internet.

      • postalrat 4 minutes ago

        I believe it's the biggest change since the internet, but which will prove bigger will probably remain subjective.

    • jiveturkey 4 hours ago

      you think folks that have experience managing this much money/resources (unlike yourself) are clueless? more likely it's 4D chess.

      • jamwil 3 hours ago

        Just like the metaverse?

        • jiveturkey 2 hours ago

          sometimes you lose

          • dude250711 an hour ago

            Like everything but acquisitions since the original product?

        • goyagoji 2 hours ago

          Anti-aging startups are 5D chess; the 4th dimension is the most fickle, so it's very hard to make a 4D intercept when your ideas are stupid.

      • AngryData an hour ago

        Yes, yes I do. How much practical experience does someone with billions of dollars have with the average person, the average employee, the average job, and the kind of skills and desires that normal people possess? How much does today's society and culture and technology resemble society even just 15 years ago? Being a billionaire allows them to put themselves into their own social and cultural bubble surrounded by sycophants.

  • Hilift 10 hours ago

    META has only made $78.7 billion operating income in the past 12 months of returns. Time to buckle up!

    https://finance.yahoo.com/quote/META/financials/

    • ipnon 9 hours ago

      It's really difficult to wrap one's head around the cash they're able to deploy.

    • hinkley 3 hours ago

      23:1 P/E. Not Tesla levels of stupidity but still high for a mature company.

    • nashashmi 10 hours ago

      An astonishing number

    • stripe_away 9 hours ago

      how does this compare to the depreciation cost of their datacenters?

      • dh2022 7 hours ago

        The financials at the link do not specifically call out depreciation expense, but operating income should already account for it.

        The financials have a line below the Net Income line called "Reconciled depreciation" at about $16.7 billion. I don't know exactly what that means (maybe it's how they get to the EBITDA metric), but it may be the figure you're looking for.

      • Hilift 3 hours ago

        Most of the operating expenses seem to be in the $13 billion "R&D" spend on the Q2 2025 statement.

        https://pbs.twimg.com/media/GxIeCe7bkAEwXju?format=jpg&name=...

    • almostgotcaught 8 hours ago

      385 comments based on a clickbait headline from the Telegraph (you know, that sophisticated tech-focused newspaper...)

  • alsetmusic 10 hours ago

    It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.

    A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.

    I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.

    • miki123211 9 hours ago

      I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

      We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.

      This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless; there's a lot you can do with AI already. But a lot of use cases that seem obvious, and not only in retrospect, will only be possible once it matures.

      • bryanlarsen 9 hours ago

        Some people even figured it out in the 80's. Sears founded and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.

        Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.

        • tombert 8 hours ago

          Today I learned that Sears founded Prodigy!

          Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.

          • kens 4 hours ago

            On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.

            • duderific 20 minutes ago

              Wow, I hadn't thought about Computerland for quite a while. That was my go-to to kill some time at the mall when I was a teen.

            • NordSteve 2 hours ago

              Bought my IBM PC from Sears back in the day. Still have the receipt.

              • zenonu an hour ago

                Worthy of its own Hacker News post. Would love to see it.

          • dh2022 8 hours ago

            My favorite anecdote about Sears is from Starbucks' current HQ: the building used to be a Sears warehouse. Before renovation, the first-floor walls next to the elevators had Sears' "commitment to customers" (or something like that) on them.

            To me it read like it was written by Amazon decades earlier. Something about how Sears promises that customers will be 100% satisfied with the purchase, and if for whatever reason that is not the case customers can return the purchase back to Sears and Sears will pay for the return transportation charges.

            • tombert 7 hours ago

              Craftsman tools have almost felt like a life-hack sometimes; their no-questions-asked warranties were just incredible.

              My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.

              I haven't tested these warranties since Craftsman was sold to Stanley Black & Decker, but when it was still owned by Sears I almost exclusively bought Craftsman tools as a result of their wonderful warranties.

              • mindcrime 7 hours ago

                FWIW, I bought a Craftsman 1/4" drive ratchet/socket set at a Lowes Home Improvement store last year, and when I got it home and started messing with it, the ratchet jammed up immediately (before even being used on any actual fastener). I drove back over there the next day and the lady at the service desk took a quick look, said "go get another one off the shelf and come back here." I did, and by the time I got back she'd finished whatever paperwork needed to be done, handed me some $whatever and said "have a nice day."

                Maybe not quite as hassle free as in years past, but I found the experience acceptable enough.

                • tracker1 3 hours ago

                  I think that's as much about Lowe's as it is about Craftsman... I don't think Craftsman tools have been particularly well built, just that they had, and still have, enough margin to offer a no-questions-asked policy... it probably helps that a lot of the materials are completely and easily recyclable.

                • projektfu 2 hours ago

                  It made sense to use the Craftsman screwdriver as a pry bar in a pinch and save the really good one for just turning screws.

              • lostlogin 5 hours ago

                > My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.

                This is covered by consumer protection laws in some places. 4 years on a spade would be pushing it, but I’d try with a good one. Here in New Zealand it’s called ‘The Consumer Guarantees Act’. We pay more at purchase time, but we do get something for it.

              • ssl-3 5 hours ago

                Lots of tools have lifetime warranties. Harbor Freight's swap process is probably fastest, these days, for folks with one nearby. Tekton's process is also painless, but slower: Send them a photo of the broken tool, and they deliver a new tool to your door.

                But I'm not old enough to remember a time when lifetime warranties were unusual. In my lifetime, a warranty on hand tools has always seemed more common than not, outside of the bottom-most cheese-grade stuff.

                I mean: The Lowes house-brand diagonal cutters I bought for my first real job had a lifetime warranty.

                And before my time of being aware of the world, JC Penney sold tools with lifetime warranties.

                (I remember being at the mall with my dad when he took a JC Penney-branded screwdriver back to JC Penney -- probably 35 years ago.

                He got some pushback from people who insisted that they had never sold tools, and then from people who insisted that they never had warranties, and then he finally found the fellow old person who had worked there long enough to know what to do. Without any hesitation at all, she told us to walk over to Sears, buy a similar Craftsman screwdriver, and come back with a receipt.

                So that's what we did.

                She took the receipt and gave him his money back.

                Good 'nuff.)

              • kjkjadksj 7 hours ago

                Harbor freight is like that too.

            • jimbokun 3 hours ago

              The Sears Catalog was the Amazon of its day.

          • gcanyon 3 hours ago

            :-) Then it's going to blow your mind that CompuServe (while not founded by them) was a product of H&R Block.

          • esaym 4 hours ago

            There were quite a few small ISP's in the 1990's. Even Bill Gothard[0] had one.

            [0]https://web.archive.org/web/19990208003742/http://characterl...

            • hollerith 4 hours ago

              Prodigy predates ISPs (internet service providers). Before the web matured a little, around 1993, the internet was too technically challenging to interest most consumers, except maybe for email. Prodigy was formed in 1984, and although it offered email, it was walled-garden email: a Prodigy user could not exchange email with the internet until the mid-1990s, by which time Prodigy might have become an ISP for a few years before going out of business.

            • tombert an hour ago

              At a previous job I worked under a guy who started his own ISP in the early 90’s. I would have loved to have been part of that scene but I was only like four when that happened.

          • htrp 8 hours ago

            Blame short sighted investors asking Sears to "focus"

            • dehrmann 8 hours ago

              They weren't wrong. Its core business, in what is still a viable-enough sector, collapsed. And if it had been truly well managed, running both an ISP and a retailer should have given it enough insight to become Amazon.

              • svnt 7 hours ago

                Timing is a difficult variable.

              • KerrAvon 6 hours ago

                It wasn't possible for them to be well managed at the time it mattered. Sears was loaded with debt by private equity ghouls; same story for almost all defunct brick and mortar businesses; Amazon was a factor, but private equity is what actually destroyed them.

                • andrew_lettuce 3 hours ago

                  Thank you for bringing this up. Sears really didn't have a choice; they were a victim of the most pernicious LBO, Gordon Gekko-style strip-mining nonsense on the PE spectrum. Not all private equity is the same, but after seeing two PE deals from the inside (one a leveraged buyout) and a VC one with the "grow at an insane pace" playbook, I think I prefer the naked and aligned greed of the VC model; PE destroyed both of the other companies, while the VC one was already doomed.

                • frmersdog 4 hours ago

                  And, knowing Jeff Bezos' private equity origins, one could be forgiven for entertaining the thought that none of this was an accident. Just don't be an idiot and, you know, give voice to that thought or anything.

                  • chollida1 2 hours ago

                    > And, knowing Jeff Bezos' private equity origins

                    He doesn't have private equity origins as far as I know. He came from DE Shaw, a very well respected and long running hedge fund.

                  • ProjectArcturis 3 hours ago

                    Are you suggesting that Jeff Bezos somehow convinced all his PE buddies to tank Sears (and their own loans to it) in order for him to build Amazon with less competition? Because, well, no offense, but that seems like a remarkably naive understanding of capital markets and individual motivations. Especially when it's well documented how Eddie Lampert's libertarian beliefs caused him to run it into the ground.

              • mikestew 5 hours ago

                > They weren't wrong.

                Evidence suggests that maybe they were. "Focusing" obviously didn't work.

                But at the end of the day, it was private equity and the hubris of a CEO who wasn't nearly as clever as he liked to think he was.

          • matthewn 2 hours ago

            For more on this -- and how Sears had everything it needed (and more) to be what Amazon became -- see this comment from a 2007 MetaFilter thread: https://www.metafilter.com/62394/The-Record-Industrys-Declin...

            • fragmede an hour ago

              The untold story is the names of the individuals fighting the office politics that led to that (not) happening.

        • djtango 3 hours ago

          This is a great example that I hadn't heard of, and it reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988.

          A16Z once talked about how the scars of being too early cause investors/companies to get fixated on the idea that something will never work. Then some new, younger people who never got burned try the same idea, and it works.

          Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.

          • mschuster91 2 hours ago

            Reminds me of Elon not taking no for an answer. He did it twice, with massive success.

            A true shame to see how he's completely lost the plot with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.

            And now he's run out of tricks - and, more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.

        • tracker1 3 hours ago

          On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90s came around, "the internet is just a fad" was pretty much the sentiment from Sears' leadership...

          They literally killed their catalog sales right when they should have been ramping up and putting the catalog online. They could easily have beaten Amazon at everything other than books.

        • cyanydeez 3 hours ago

          The problem is that ISPs became a utility, not some fountain of unlimited growth.

          What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.

          I agree, though: it's fundamentally a utility, which means there's more value in proper government authority than in private interests.

        • outside1234 8 hours ago

          Newton at Apple is another great one, though they of course got there.

      • deegles 5 hours ago

        > We're clearly seeing what AI will eventually be able to do

        Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust their medical advice without review from an actual doctor, why would you trust their advice on anything else?

        Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.

        For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". It's a systemic issue; no amount of data will solve it because LLMs will -never- be sentient.

        Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.

        • hnfong 4 hours ago

          I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.

          Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.

          • deegles 2 hours ago

            You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns.

          • mdemare 4 hours ago

            Exactly. Books are still being translated by human translators.

            I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.

            GPT-5 output for example:

            Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem. Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart. Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted. They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them. Each bore a respectable, bourgeois name from more carefree days: Welgelegen, Buitenrust, Nooitgedacht, Rustenburg. Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.

            • tschwimmer an hour ago

              Can you provide a reference translation, or at least call out the issues you see with this passage? I see "far, far away in the [time period]", which I imagine should be "a long time ago". What are the other issues?

          • h4ck_th3_pl4n3t 2 hours ago

            By definition, transformers can never exceed average.

            That is the thing, and what companies pushing LLMs don't seem to realize yet.

            • janalsncm an hour ago

              Can you expand on this? For tasks with verifiable rewards you can improve with rejection sampling and search (i.e. test time compute). For things like creative writing it’s harder.

        • rstuart4133 3 hours ago

          > Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.

          I consider myself an LLM skeptic, but gee, saying they are a "dead end" seems harsh.

          Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it, and far faster, than most humans.

          LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.

          Merely being able to understand language or having a good memory is not sufficient to code, or to do much else, on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

          • deegles 2 hours ago

            > it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

            That's just it. LLMs are a component; they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say the chip is sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.

      • me551ah 8 hours ago

        Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can shrink chips, and Moore's law is already dead.

        So newer chips will not be exponentially better, only incremental improvements, and unless the price of electricity comes down exponentially we might never see AGI at a price point that's cheaper than hiring a human.

        Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.

        The reason the internet, smartphones and computers saw exponential growth from the 90s on is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.

        • mindcrime 7 hours ago

          > Scaling AI will require an exponential increase in compute and processing power,

          A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that of course, but I take the existence of the human brain as an existence proof that some kind of machine can provide human level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPU's.

          • fluoridation 6 hours ago

            If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms: if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that, since it's dead silicon, it could not be changed and iterated on.

            • mindcrime 6 hours ago

              > the reason why they're so inefficient is not algorithmic, but purely architectural.

              I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent.

              And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently.

              The only other thing I would add, is that - relative to what I said in the post above - when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table - including maybe something different from ANN's altogether.

            • penteract 5 hours ago

              If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could have 1 trillion activations per second, which is in the realm of modern hardware.

              People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.

              [1] https://spectrum.ieee.org/fast-efficient-neural-networks-cop...

              [2] https://aiimpacts.org/brain-performance-in-flops/
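
              Rough sketch of that arithmetic, in case anyone wants to poke at the inputs. The neuron count and time constant are the figures above; synapses_per_neuron is an extra assumption of mine (just a commonly quoted order of magnitude), and it's what pushes the result toward the kind of numbers in [2]:

                  # Back-of-envelope only; inputs are rough assumptions, not measurements.
                  neurons = 10e9              # 10 billion neurons (figure from above)
                  time_constant_s = 10e-3     # ~10 ms membrane time constant [1]

                  activations_per_sec = neurons / time_constant_s
                  print(f"{activations_per_sec:.0e} activations/s")    # 1e+12, ~1 trillion

                  # Assume each activation touches every incoming synapse (big assumption).
                  synapses_per_neuron = 1e4
                  synaptic_ops_per_sec = activations_per_sec * synapses_per_neuron
                  print(f"{synaptic_ops_per_sec:.0e} synaptic ops/s")  # ~1e+16, ballpark of [2]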

            • chasd00 4 hours ago

              > If we suppose that ANNs are more or less accurate models of real neural networks

              I believe the problem is that we don't understand actual neurons, let alone actual networks of neurons, well enough to know whether any model is accurate or not. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do.

              • munksbeer 3 hours ago

                This is a bit of a cynical take. Neural networks have been "a thing" for decades. A quick google suggests 1940s. I won't quibble on the timeline but no-one was trying to trick anyone with the name back then, and it just stuck around.

            • eikenberry 6 hours ago

              > If we suppose that ANNs are more or less accurate models of real neural networks [..]

              ANNs were inspired by biological neural structures, and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insight into how much it can help will come from this sort of comparison.

              • penteract 4 hours ago

                Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing)? I've seen it a few times on HN, and I'm not sure what people mean by it.

                By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of matrix multiplication), together with a threshold-like function, doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't.
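
                For concreteness, this is the abstraction I mean, stripped down to a single unit: a dot product of weights and inputs plus a bias, pushed through a threshold-like function. A toy sketch in plain Python, not a claim about biological fidelity:

                    import math

                    # One artificial "neuron": weighted sum of inputs, then a sigmoid
                    # standing in for the "threshold-like function".
                    def neuron(inputs, weights, bias):
                        pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
                        return 1.0 / (1.0 + math.exp(-pre_activation))

                    # Three upstream activations feeding one downstream unit.
                    print(neuron([0.2, 0.9, 0.0], [1.5, -0.7, 2.0], bias=0.1))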

                • scheme271 13 minutes ago

                  Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. Although I suppose we could use non-continuous functions for activations in a NN, I don't think there's an easy way to train a NN that does that.

                • fluoridation 4 hours ago

                  > I haven't heard anything about biological systems doing something comparable to backpropagation

                  The brain isn't organized into layers like ANNs are. It's a general graph of neurons and cycles are probably common.

                  • HarHarVeryFunny 3 hours ago

                    Actually that's not true. Our neocortex - the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence, has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a teatowel, consisting of 6 layers of different types of neurons with a specific inter-layer and intra-layer pattern of connections. It's not a general graph at all, but rather a specific processing architecture.

          • lawlessone 6 hours ago

            DeepMind was experimenting with this https://github.com/google-deepmind/lab a few years ago.

            Having AI agents learn to see, navigate and complete tasks in a 3d environment. I feel like it had more potential than LLMs to become an AGI (if that is possible).

            They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.

        • foobarian 4 hours ago

          > Scaling AI will require an exponential increase in compute and processing power,

          I think there is something more happening with AI scaling; the scaling factor per user is a lot higher and a lot more expensive. Compare that to the big early internet companies: you added one server and you could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue can cover that it's hard to break even, even with an actual paid subscription.

        • thfuran 6 hours ago

          We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.

      • armada651 4 hours ago

        > The groundwork has been laid, and it's not too hard to see the shape of things to come.

        The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.

        • jimbokun 2 hours ago

          Is it still giving people headaches and making them nauseous?

          • armada651 an hour ago

            Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage.

            Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.

            • duderific 13 minutes ago

              > Mind you, some people also get motion sick by watching a first-person shooter on a flat screen

              Yep I'm that guy. I blame it on being old.

      • matthewdgreen 4 hours ago

        As someone who was a customer of Netflix from the dialup to the broadband era, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" stage of, say, streaming video in 2001 -- whereas I think you mean to suggest we're trying to do Netflix back in the 1980s, when the tech for widespread broadband was just fundamentally not available.

        • tracker1 3 hours ago

          Oh, like RealPlayer in the late 90's (buffering... buffering...)

      • skeezyboy 9 hours ago

        >I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

        I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.

        • dml2135 7 hours ago

          It's a logical fallacy to assume that just because some technology experienced a period of exponential growth, all technology will always experience constant exponential growth.

          There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.

          We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.

          Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.

          The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.

          "Progress" moves in fits and starts. It is the furthest thing from inevitable.

          • novembermike 2 hours ago

            Most growth is actually logistic: an S-shaped curve that starts out exponential but slows down rapidly as it approaches some asymptote. In fact, basically everything we see as exponential in the real world is logistic.
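
            A toy illustration of the difference (every number here is arbitrary): the two curves track each other early on, then the logistic one flattens toward its ceiling.

                import math

                # Logistic growth with carrying capacity K vs. pure exponential growth.
                K, r, x0 = 1_000_000, 0.5, 100.0

                def logistic(t):
                    # closed-form solution of dx/dt = r * x * (1 - x / K)
                    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

                def exponential(t):
                    return x0 * math.exp(r * t)

                for t in (0, 5, 10, 15, 20, 25):
                    print(f"t={t:2d} logistic={logistic(t):9.0f} exp={exponential(t):11.0f}")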

          • jopsen 5 hours ago

            True, but adoption of AI has certainly seen exponential growth.

            Improvement of models may not continue to be exponential.

            But models might be good enough; at this point it seems more like they need integration and context.

            I could be wrong :)

            • tracker1 3 hours ago

              At what cost though? Most AI operations are losing money, using a lot of power, with massive infrastructure costs, not to mention the hardware costs to get going. And that isn't even covering the level of usage many/most people want, and they certainly aren't going to pay the hundreds of dollars per month per person that it currently costs to operate.

              • martinald an hour ago

                This is a really basic way to look at unit economics of inference.

                I did some napkin math on this.

                32x H100s cost about $2/hr each at 'retail' rental prices. I would hope that the big AI companies get them cheaper than this at their scale.

                These 32 H100s can probably do something on the order of >40,000 tok/s on a frontier scale model (~700B params) with proper batching. Potentially a lot more (I'd love to know if someone has some thoughts on this).

                So that's $64/hr or just under $50k/month.

                40k tok/s is a lot of usage, at least for non-agentic use cases. There is no way you are losing money on paid chatgpt users at $20/month on these.

                You'd still break even supporting ~200 Claude Code-esque agentic users who were using it at full tilt 40% of the day at $200/month.

                Now - this doesn't include training costs or staff costs, but on a pure 'opex' basis I don't think inference is anywhere near as unprofitable as people make out.
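
                For what it's worth, here's that napkin math as a quick script. Every input (the $2/hr per-GPU rental price, the 40k tok/s aggregate throughput, the 40% duty cycle, the $200/month plan) is an assumption carried over from above, not a measured figure:

                    # Napkin math only; every input is an assumption from the comment above.
                    GPUS = 32
                    PRICE_PER_GPU_HR = 2.00     # assumed "retail" rental price, USD per GPU-hour
                    AGG_TOK_PER_SEC = 40_000    # assumed aggregate throughput, ~700B-param model

                    cost_hr = GPUS * PRICE_PER_GPU_HR      # $64/hr
                    cost_month = cost_hr * 24 * 30         # ~$46k/month

                    # $200/month agentic users at ~40% duty cycle: how many cover the opex,
                    # and what per-user burst rate does that leave?
                    plan, duty = 200, 0.40
                    breakeven_users = cost_month / plan
                    tok_per_active_user = AGG_TOK_PER_SEC / (breakeven_users * duty)

                    print(f"opex: ${cost_hr:.0f}/hr, ${cost_month:,.0f}/month")
                    print(f"break-even $200/month users: {breakeven_users:.0f}")
                    print(f"burst rate per active user: {tok_per_active_user:.0f} tok/s")

                On these assumed numbers the cluster is roughly $46k/month and about 230 such users cover it, which lines up with the ~200-user ballpark above (before training and staff costs).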

                • tracker1 13 minutes ago

                  My thought is closer to the developer use case: someone who wants their codebase included in the queries along with heavy use all day long... which is closer to my point that many users are unlikely to spend hundreds a month, at least with the current level of results people get.

                  That said, you could be right, considering Claude Max's price is $100/mo... but I'm not sure where that sits in terms of typical, or top-5%, usage against the monthly allowance.

            • BobaFloutist 29 minutes ago

              > True, but adoption of AI has certainly seen exponential growth.

              I mean, for now. The population of the world is finite, and there's probably a finite number of uses of AI, so it's still probably ultimately logistic

        • dormento 5 hours ago

          > I did think the same thing about the 8bit era of video games.

          Can you elaborate? That sounds interesting.

        • echelon 9 hours ago

          Speaking of Netflix -

          I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.

          Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.

          I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.

          For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.

          • AnotherGoodName 9 hours ago

            Fwiw LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in there's literally people claiming it's completely useless and not going to change a thing. Which is crazy.

            • lokar 8 hours ago

              That’s an anecdote about intensity, not volume. The extremes on both sides are indeed very extreme (no value, replacing most white collar jobs next year).

              IME the volume is overwhelming on the pro-LLM side.

              • whatevertrevor 4 hours ago

                Yeah the conversation on both extremes feels almost religious at times. The pro LLM hype feels more disconcerting sometimes because there are literally billions if not trillions of dollars riding on this thing, so people like Sam Altman have a strong incentive to hype the shit out of it.

            • Jensson 6 hours ago

              One side's extremes say LLMs won't change a thing; the other side's extremes say LLMs will end the world.

              I don't think the ones saying it won't change a thing are the most extreme here.

              • wyre 5 hours ago

                Luckily for humanity reality is somewhere in between extremes, right?

          • jaimebuelta 8 hours ago

            I see the point at the moment for "low quality advertising", but we are still far from high-quality video generated by AI.

            It's the equivalent of those cheap digital effects: they look bad in a Hollywood movie, but they let students shoot their action home movies.

          • didibus 9 hours ago

            You're right, and I also think LLMs have an impact.

            The issue is the way the market is investing they are looking for massive growth, in the multiples.

            That growth can't really come from cutting costs. It has to come from creating new demand for new things.

            I think that's what hasn't happened yet.

            Are diffusion models increasing the demand for video and image content? Is it having customers spend more on shows, games, and so on? Is it going to lead to the creation of a whole new consumption medium ?

            • jopsen 5 hours ago

              > Is it going to lead to the creation of a whole new consumption medium ?

              Good question? Is that necessary, or is it sufficient for AI to be integrated in every kind of CAD/design software out there?

              Because I think most productivity tools whether CAD, EDA, Office, graphic 2d/3d design, etc will benefit from AI. That's a huge market.

              • didibus 3 hours ago

                I guess there are two markets to consider.

                The first is the market for the AI foundation models themselves: will they have customers willing, long term, to pay a lot of money for access to the models?

                I think yes, there will be demand for foundational AI models, and a lot of it.

                The second market is the market for CAD, EDA, Office, graphic 2D/3D design, etc. Will this market grow because these products integrate AI? That is the question. Otherwise, you could almost hypothesize that these markets will shrink, since AI becomes an additional cost of doing business that customers expect to be included. Or maybe these companies manage to sell their customers a premium for the AI features, taking a cut above what they pay the foundation models under the hood; that's a possibility.

          • mh- 9 hours ago

            It's quite incredible how fast the generative media stuff is moving.

            The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) feels unimaginable compared to last year, when OpenAI released Sora (closed/hosted).

          • realo 8 hours ago

            As long as you do not make ads with four-fingered hands, like those clowns ... :)

            https://www.lapresse.ca/arts/chroniques/2025-07-08/polemique...

            • echelon 8 hours ago

              https://www.npr.org/2025/06/23/nx-s1-5432712/ai-video-ad-kal...

              Typical large team $300,000 ad made for < $2,000 in a weekend by one person.

              It's going to be a bloodbath.

              • neaden 5 hours ago

                This ad was purposefully playing off the fact that it was AI, though; it was a string of short, bizarre bits, like two old women selling Fresh Manatee out of the back of a truck. You couldn't replace a regular ad with this.

              • mjr00 7 hours ago

                > Kalshi's Jack Such declined to disclose Accetturo's fee for creating the ad. But, he added, "the actual cost of prompting the AI — what is being used in lieu of studios, directors, actors, etc. — was under $2,000."

                So in other words, if you ignore the costs of paying people to create the ad, it barely costs anything. A true accounting miracle!

                • echelon 6 hours ago

                  Do you pay people to pump your gas?

                  How about harvesting your whale blubber to power your oil lamp at night?

                  The nature of work changes all the time.

                  If an ad can be made with one person, that's it. We're done. There's no going back to hiring teams of 50 people.

                  It's stupid to say we must hire teams of 50 to make an advertisement just because. There's no reason for that. It's busy work. The job is to make the ad, not to give 50 people meaningless busy work.

                  And you know what? The economy is going to grow to accommodate this. Every single business is now going to need animated ads. The market for video is going to grow larger than we've ever before imagined, and in ways we still haven't predicted.

                  Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.

                  You're going to have silly videos for corporate functions. Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Whatever. There'll be a market for everything, and 100,000 times as many creators with actual autonomy.

                  In some number of years, there is going to be so much more content being produced. More content in single months than in all human history up to this point. Content that caters to the very long tail.

                  And you know what that means?

                  Jobs out the wazoo.

                  More jobs than ever before.

                  They're just going to look different and people will be doing more.

              • dingnuts 7 hours ago

                oh no the poor advertisers

                • whatevertrevor 4 hours ago

                  Cheaper, poorer-quality ads mean a bad time for us, the people being incessantly targeted by this crap.

                  Websites are already finding creative ways around DNS blocklists for ad serving.

      • bob1029 7 hours ago
      • Q6T46nT668w6i3m 9 hours ago

        There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.

        • ghurtado 9 hours ago

          There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.

          > Progress in AI has always been a step function.

          There's decidedly no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.

          • ezst 6 hours ago

            > There's also no evidence that it won't

            There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we were coming up short even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique came along yesterday to dislodge LLMs, we are in for a new winter.

            • mschuster91 an hour ago

              Oh, I believe that while LLMs are a dead end now, the applications of AI in vision and in the physical world (i.e. robots with limbs) will usher in yet another wrecking of the lower classes of society.

              Just as AI has killed off demand for lower-skill work in copywriting, translation, design and coding, it will do so for manufacturing. And that will be a dangerous bloodbath, because there will not be enough juniors any more to replace the seniors aging out or quitting in frustration at being reduced to cleaning up AI crap.

          • dml2135 7 hours ago

            What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations.

            We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work.

            Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.

          • abc_lisper 6 hours ago

            What do you call GPT 3.5?

        • the8472 8 hours ago

          rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero.

        • ninetyninenine 9 hours ago

          Uh it’s been multiple repeated step ups in the last 15 years. The trend line is up up up.

        • eichin 6 hours ago

          The innovation here is that the step function didn't traditionally go down

      • i_love_retros 7 hours ago

        Is some potential future AGI breakthrough going to come from LLMs, or will they plateau in terms of capabilities?

        It's hard for me to imagine Skynet growing out of ChatGPT.

        • whatevaa 6 hours ago

          The old story of paperclip AI shows that AGI is not needed for sufficiently smart computer to be dangerous.

      • thefourthchime 8 hours ago

        I'm starting to agree with this viewpoint. As the technology solidifies around roughly what we can do now, the aspirations are going to have to be cut back until there are a couple more breakthroughs.

      • kokanee 6 hours ago

        I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.

        What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.

      • nutjob2 9 hours ago

        > We're clearly seeing what AI will eventually be able to do

        I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.

        Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.

    • StopDisinfo910 4 hours ago

      > A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)

      If you had actually invested in AI pure players and Nvidia, the shovel seller, a couple years ago and were selling today, you would have made a pretty penny.

      The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.

      • bcrosby95 4 hours ago

        Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.

        • what_ever 4 hours ago

          Hard to know what OP asked for, but if they asked about AI specifically, the advice does not need to be holistic.

      • aksss 4 hours ago

        Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.

        • BrawnyBadger53 2 hours ago

          Personal opinion, I'm bearish on the shovel seller long term because the companies that are training AI are likely to build their own hardware. Google already does this. Seems like a matter of time for the rest of the mag 7 to join. The rest of the buyers aren't growing enough to offset that loss imo.

          • godelski an hour ago

            FWIW, Nvidia's moat isn't hardware and they know this (they even talk about it). Hardware wise AMD is neck and neck with them, but AMD still doesn't have a CUDA equivalent. CUDA is the moat. As painful as it is to use, there's a long way to go for companies like AMD to compete here. Their software is still pretty far behind, despite their rapid and impressive advancements. It will also take time to get developer experience to saturate within the market, and that will likely mean AMD needs some good edge over Nvidia, like adding things Nvidia can't do or being much more cost competitive. And that's not something like adding more VRAM or just taking smaller profit margins because Nvidia can respond to those fairly easily.

            That said, I still suggested the parent sell. Real money is better than potential money. Classic gambler's fallacy, right? FOMO is letting hindsight get in the way of foresight.

        • godelski an hour ago

          What's the old Rockefeller quote? When your shoe shiner is giving you stock advice it is time to sell (you may have heard the taxicab driver version).

          It depends on how risk-averse you are and how much money you have there.

          If you're happy with those returns, sell. FOMO is dumb. You can't time the market, the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in your hand is worth more than two in the bush, right? That money isn't worth anything until it is realized[0].

          Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.

          If you're a little risk-averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.

          If you wanna YOLO, then YOLO.

          My advice? Don't let hindsight get in the way of foresight.

          [0] I had some Nvidia stocks at 450 and sold at 900 (before the split, so would be $90 today). I definitely would have made more money if I kept them. Almost double if I sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having this debt paid off is still a better decision in my mind because I can't predict the future. I could have sold 2 weeks later and made less! Or even in April of this year and made the same amount of money.

        • StopDisinfo910 4 hours ago

          I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.

          I'm just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That's even more diversification than buying Microsoft.

    • torginus 10 hours ago

      It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).

      • benterix 9 hours ago

        It's a cliche but people really underestimate and try to downplay the role of luck[0].

        [0] https://www.scientificamerican.com/blog/beautiful-minds/the-...

        • Aurornis 5 hours ago

          People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.

          • not_the_fda 3 hours ago

            Sure helps to be born wealthy, go to private school, and attend an Ivy League college.

        • jauntywundrkind 9 hours ago

          Luck. And capturing strong network effect.

          The ascents of the era all feel like examples of anti-markets, of having gotten yourself into an intermediary position where you control both sides' access.

        • alecsm 9 hours ago

          Success happens when luck meets hard work.

        • ericd 6 hours ago

          Ability vastly increases your luck surface area. A single poker hand has a lot of luck, and even a game, but over long periods, ability starts to strongly differentiate peoples' results.

          • whatevertrevor 4 hours ago

            Except you can play hundreds of thousands of poker hands in your lifetime, but only have time/energy/money to start a handful of businesses.

          • quantified 5 hours ago

            Win a monster pot and you can play a lot of more interesting hands.

        • UltraSane 9 hours ago

          Every billionaire could have died from childhood cancer.

      • jocaal 10 hours ago

        Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be 1 founder of facebook.

        • vovavili 10 hours ago

          Plenty of smart people prefer not to try their luck, though. A smart but risk-avoidant person will never be the one to create Facebook either.

          • estearum 10 hours ago

            Plenty of them do try and fail, and then one succeeds, and it doesn't mean that person is intrinsically smarter/wiser/better/etc than the others.

            There are far, far more external factors on a business's success than internal ones, especially early on.

            • skeezyboy 9 hours ago

              For instance, if that Social Network film by David Fincher hadn't come out, would we have even heard of this Mark guy?

              • dylan604 9 hours ago

                But then we wouldn't have had that great soundtrack from Trent and Atticus

          • dgfitz 9 hours ago

            What risk was there in creating facebook? I don't see it.

            Dude makes a website in his dorm room and I guess eventually accepts free money he is not obligated to pay back.

            What risk?

            • CamperBob2 7 hours ago

              Once you go deep enough into a personal passion project like that, you run a serious risk of flunking out of school. For most people that feels like a big deal. And for those of us with fewer alternatives in life, it's usually enough to keep us on the straight and narrow path.

              People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did.

        • miki123211 9 hours ago

          I view success as the product of three factors, luck, skill and hard work.

          If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high.

          • whodidntante 9 hours ago

            There is another dimension, which is mostly but not fully characterized as perseverance, but many times with an added dose of ruthlessness

            Microsoft, Facebook, Uber, Google and many others all had strong doses of ruthlessness.

            • woooooo 9 hours ago

              Metaverse and this AI turnaround are characterized by the LACK of perseverance, though. They remind me of the time I bought a guitar and played it for three months.

              • whodidntante 2 hours ago

                True, but I was around and saw first hand how Zuckerberg dominated social networking. He was pretty ruthless when it came to both business and technology, and he instilled in his team a religious fervor.

                There is luck (and skill) involved when new industries form, with one or a very small handful of companies surviving out of the many dozens of hopefuls. The ones who do survive, however, are usually the most ruthless and know how to leverage skill, business, and markets.

                It does not mean that they can repeat their success when their industry changes or new opportunities come up.

              • throwway120385 8 hours ago

                When you put the guitar down after three months it's one thing, but when you reverse course on an entire line of development in a way that might affect hundreds or thousands of employees it's a failure of integrity.

              • ghurtado 8 hours ago

                > They remind me of the time I bought a guitar and played it for three months.

                This is now my favorite way of describing fleeting hype-tech.

          • benterix 9 hours ago

            Or you can just have rich parents and do nothing, and still be considered successful. What you say only applies to people who start from zero, and even then I'd call luck the dominant factor (based on observing my skillful and hardworking but not really successful friends).

          • nirav72 9 hours ago

            >luck, skill and hard work.

            Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew.

            • Jensson 8 hours ago

              > I've known a few people that lacked 2 of those 3 things and yet somehow succeeded

              Succeeded in making something comparable to facebook? Who are those?

              • nirav72 7 hours ago

                No. Nothing of that scale. I was replying to OP's take on the 3 factors that lead to success in general. I was simply pointing out a 4th factor that plays a big role.

      • _Algernon_ 9 hours ago

        You should read Careless People if this boggles your mind.

      • ghurtado 9 hours ago

        When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.

      • ninetyninenine 9 hours ago

        Giving 1.5 million salary is nothing for these people.

        It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.

        You are where you are, and not where they are, because your mind is boggled by winning strategies that are designed to arrive at success through losses and through dancing around the risk of losing.

        Obviously Mark is where he is also because of luck. But he's not an idiot, and clearly it's not all luck.

        • epolanski 7 hours ago

          But how is it worth it for Meta, since they won't really monetize it?

          At least the others can kinda bundle it as a service.

          After spending tens of billions on AI, has it added a single dollar to Meta's revenue?

          • amalcon an hour ago

            The not-so-secret is that the "killer apps" for deep neural networks are not LLMs or diffusion models. Those are very useful, but the most valuable applications in this space are content recommendation and ad targeting. It's obvious how Meta can use those things.

            The genAI stuff is likely part talent play (bring in good people with the hot field and they'll help with the boring one), part R&D play (innovations in genAI are frequently applicable to ad targeting), and part moonshot (if it really does pan out in the way boosters seem to think, monetization won't really be a problem).

          • ninetyninenine 7 hours ago

            >But how is it worth for meta, since they won't really monetize it.

            Meta needs growth, as their main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble. They bombed that one. This is another gamble.

            They're not stupid. All the risks you're aware of, they're also aware of. They were aware of the risks for VR too. They need to find a new high-growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them given their massive resources.

      • saubeidl 10 hours ago

        It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.

      • balamatom 9 hours ago

        I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.

        Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.

        Gee, what makes it grow so big though? The power of human ambition?

        And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.

        To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept corresponding to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind; considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person" but he's not the one with the impostor syndrome.

        For ours are human minds, optimized to view things in terms of person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.

        See also: "Beyond Power / Knowledge", Graeber 2006.

        • ghurtado 8 hours ago

          Why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find these kinds of quasi-spiritual, stream-of-consciousness, steadily lengthening, pseudo-technical, word-salad diatribes.

          It's unique to this site, and these types of comments all have an eerily similar vibe.

          • Karrot_Kream 5 hours ago

            This is pretty common on HN but not unique to it. Lots of rationalist adjacent content (like stuff on LessWrong, replies to Scott Alexander's substack, etc) has it also. Here I think it comes from users that try to intellectualize their not-very-intellectual, stream of consciousness style thoughts, as if using technical jargon to convey your feelings makes them more rational and less emotional.

            • ghurtado 4 hours ago

              Thank you.

              I find this type of thing really interesting from a psychological perspective.

              A bit like watching videos of perpetual motion machines and the like. Probably says more about me than it does about them, though.

              • Karrot_Kream 4 hours ago

                Good for you! I wish I were wired that way.

                Unfortunately this kind of talk really gets under my skin and has made me have to limit my time on this site because it's only gotten more prevalent as the site has gotten more popular. I'm just baffled that so much content on this forum is people who seem to think their feelings-oriented reactions are in fact rational truths.

                • ghurtado 2 hours ago

                  Well, don't take me wrong, I get annoyed by it too.

                  But in the distant past, I would engage with this type of comment online, and that was a bad decision 100% of the time.

                  And to be fair, I'm sure many of these people are smart, they are just severely lacking in the social intelligence department.

          • JumpCrisscross 2 hours ago

            Between “presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about,” before going on and sharing an opinion on that subject, and “even the Invisible Hand of the market is hand-shaped,” I think it may just be AI slop.

          • balamatom 6 hours ago

            >why is there so much of this on HN?

            Where?

      • PhantomHour 10 hours ago

        The answer is fairly straightforward. It's fraud, and lots of it.

        An honest businessman wouldn't put their company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.

        An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.

        An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".

        Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.

        The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.

        • NickC25 9 hours ago

          As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.

          He will say whatever he wants, and because the returns have been pretty decent so far, people will just take his word for it. There aren't enough class A shares to actually force his hand to do anything he doesn't want to do.

          • PhantomHour 6 hours ago

            Zuckerberg started as a sex pest and got not an iota better.

            But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.

            • NickC25 6 hours ago

              Unfortunately I think that ship has sailed.

              And since we live in the era of the real golden rule (i.e. "he who has the gold makes the rules"), there's no chance we'll ever catch that ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist.

        • dgs_sgd 6 hours ago

          What is a good resource to read about the ad fraud? This is the first I'm hearing of that.

          • jbreckmckye 3 hours ago

            I used to work in adtech. I don't have any direct information, but I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity.

        • travisgriggs 8 hours ago

          Ha ha.

          You used “honest” and “businessman” in the same sentence.

          Good one.

    • raydev 4 hours ago

      > It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted

      Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.

    • blitzar 9 hours ago

      > record-setting bonuses they were dolling out to hire the top minds in AI

      That was soooo 2 weeks ago.

    • baxtr 6 hours ago

      > …lot of jobs will disappear.

      So it’s true that AI will kill jobs, but not in the way they’ve imagined?!

    • epolanski 7 hours ago

      > A couple of years ago, I asked a financial investment person about AI as a trick question.

      Why do you assume these people know any better than the average Joe on the street?

      Study after study demonstrates they can't even keep up with market benchmarks, so how would they be any wiser about what's a fad and what isn't?

      • quantified 5 hours ago

        I think the point of the question was to differentiate this person from the average Jane on the Street.

        • epolanski 4 hours ago

          But half the Janes will hold similar views and positions.

    • hbosch 6 hours ago

      >It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.

      Everything zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing emergently that AI is also threatening social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?

      I believe exactly 0 percent of the decision to make Llama open-source and free was altruistic; it was simply an attempt to push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.

      Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.

      • azinman2 6 hours ago

        > AI is very much an existential threat to Meta.

        How so?

        • hdgvhicv 2 hours ago

          “you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?”

          • azinman2 2 hours ago

            Meta doesn’t really serve companionship. It used to keep you connected to others in your social graph, which AI cannot replace. If IG still has the eyeballs, people can put AI-generated content on it with or without Meta’s permission.

            Like with most things, people will want what’s expensive and not what’s cheap. AI is cheap, real humans are not. Why buy diamonds when you can’t tell the difference with cubic zirconia? And yet demand for diamonds only increases.

    • mrits 10 hours ago

      I think we will see the opposite. If we made no progress with LLMs we'd still have huge advancements and growth opportunities enhancing the workflows and tuning them to domain specific tasks.

      • evilduck 10 hours ago

        I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.

        Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.

      • sebstefan 9 hours ago

        I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open source models just catch up.

        My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.

        I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.

        • thinkharderdev 5 hours ago

          Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. Like what is the actual business model? You can sell inference-as-a-service of course but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high it just seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure and competition pushes prices down for inference and what is left?

        • Schiendelman 5 hours ago

          The people who make money serving users will be the ones with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.

          You'll probably have a player that sells privacy as well.

      • OtherShrezzing 9 hours ago

        I don't see how this works, as the cost of running inference is so much higher than the revenues earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where GPT-5 and Claude 4.1 cost-quality models are SOTA.

        • HDThoreaun 9 hours ago

          With GPT-5 I'm not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases I think they'd be profitable.

          • criddell 5 hours ago

            But would they be profitable enough? They've taken on more than $50 billion of investment.

            I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. OpenAI will be lucky to generate that over the next year.

            • HDThoreaun 3 hours ago

              Meta's net profit last quarter was over $18 billion, so yeah, the big tech players definitely have a lot more runway.

          • epicureanideal 7 hours ago

            > if they stopped research and just focused on productionizing inference use cases I think they’d be profitable

            For a couple of years, until someone who did keep doing research pulled ahead a bit with a similarly good UI.

    • la64710 2 hours ago

      Correction, if I may: a lot of AI jobs will disappear. A lot of the usual jobs that were put on hold will return. This is good news for most of humankind.

    • throawaywpg 9 hours ago

      The line was to buy Amazon as it was undervalued a la IBM or Apple based on its cloud computing capabilities relative to the future (projected) needs of AI.

    • snihalani 5 hours ago

      When will the investors run out of money and stop funding hypes?

    • FrustratedMonky 5 hours ago

      "little shortsighted"

      Or, this knowingly could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots. And then hit pause button to let all that new talent figure out the next step.

    • baby 9 hours ago

      As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train, I am personally convinced this is the technology of our lifetime.

      • GoatInGrey 9 hours ago

        You are welcome to share how AI has transformed a revenue generating role. Personally, I have never seen a durable example of it, despite my excitement with the tech.

        In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.

        • AnotherGoodName 8 hours ago

          What you've described is reasonable and a clear takeaway is that AI is a timesaving tool you should learn.

          Where I share the parent's concern is with claims that AI is useless, which aren't coming from your post at all, but which I have definitely seen in the programmer community to this day. As in, the parent's concern that some programmers are missing the train is unfortunately completely warranted.

          • gspencley 4 hours ago

            I went through the parents, looking for a claim somewhere that AI was "useless." I couldn't find it.

            Yes there are lots of skeptics amongst programmers when it comes to AI. I was one myself (and still am depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human written code is not very good, and so AI is going to produce not very good code by design because that's what it was trained on.

            Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.

            All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.

            I have started to use LLMs regularly for a variety of tasks, including some with engineering. But I always end up spending a lot of time refactoring what LLMs produce for me, code-wise. And much of the time I find that I'm still learning what the LLMs can do for me that truly saves me time, vs what would have been faster to just write myself in the first place.

            LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average then even if you can net a 50% increase in coding productivity... you're only netting a 10% overall productivity optimization for an engineer BEST CASE SCENARIO.
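
            For what it's worth, a quick Amdahl's-law-style check of that back-of-the-envelope number (a minimal sketch, just plugging in the 20% coding share and 50% coding speedup assumed above):

              coding_share = 0.20    # fraction of an engineer's time spent writing code (figure from above)
              coding_speedup = 1.5   # "50% increase in coding productivity" (figure from above)

              # Amdahl's law: overall speedup when only part of the work gets faster.
              overall = 1 / ((1 - coding_share) + coding_share / coding_speedup)
              print(f"{(overall - 1) * 100:.1f}% overall gain")  # ~7.1%, so ~10% really is a best-case ceiling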

            And that's not "useless", but compared to the hype and bullshit coming out of the mouths of CEOs, it's as good as useless. That's consistent with the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.

        • cm2012 8 hours ago

          I know a company that replaced their sales call center with an AI calling bot instead. The bot got better sales and higher feedback scores from customers.

          • rs186 8 hours ago

            And I happen to know a different company that regrets their decision to do something similar:

            https://tech.co/news/klarna-reverses-ai-overhaul

            Is my anecdotal evidence any better than yours?

            • hnfong 4 hours ago

              I'm going to interpret the two stories as "50% businesses find LLMs useful" (sample size 2).

          • sameermanek 8 hours ago

            I know a company that did the same and lost billions of dollars

      • agos 7 hours ago

        why is it a train? If it's so transformative surely I can join in in a year or so?

      • conartist6 9 hours ago

        I'll say it again since I've said it a million times, it can be useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble"

        • Windchaser 9 hours ago

          Or, quite similarly, the internet bubble of the late ‘90s.

          Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.

      • skywhopper 4 hours ago

        How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?

        • baby 4 hours ago

          I'm an exec lol

      • eulers_secret 8 hours ago

        If you really think this, `baby` is an apt name! Internet, smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 y/o then sure, maybe LLMs are the biggest.

        Also disagree with missing the train, these tools are so easy to use a monkey (not even a smart one like an ape, more like a Howler) can effectively use them. Add in that the tooling landscape is changing rapidly; ex: everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome)

        The hard parts are running LLMs locally (what quant do I use? K/V quant? Trade-offs? llama.cpp or ollama or vllm? What model? How much context can I cram into my VRAM? What if I do CPU inference? Fine-tuning? etc.) and creating/training them.
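
        For anyone wondering what those knobs look like in practice, a minimal local-inference sketch with llama-cpp-python (just one of the options listed above; the model filename and settings here are hypothetical placeholders, not recommendations):

          from llama_cpp import Llama

          llm = Llama(
              model_path="some-7b-instruct-q4_k_m.gguf",  # hypothetical GGUF file; the "q4_k_m" suffix is the quant choice
              n_ctx=8192,        # context window: larger = more (V)RAM used
              n_gpu_layers=-1,   # offload all layers to the GPU; set 0 for pure CPU inference
          )

          out = llm("Summarize the trade-off between quantization level and output quality.", max_tokens=128)
          print(out["choices"][0]["text"])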

        • baby 4 hours ago

          Tu quoque

    • hearsathought 9 hours ago

      > It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.

      If AI is going to be integral to society going forward, how is it shortsighted?

      > She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).

      So you prefer a 2x gain rather than 10X gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT the past few years. Also a "financial investment person"? The anecdote feels made up.

      > She skillfully navigated the question in a way that won my respect.

      She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?

      > I personally believe that a lot of investment money is going to evaporate before the market resets.

      But you believe investing in MSFT was a better AI play than going with the "hype" even when objective facts show otherwise. Why should anyone care what you think about AI, investments and the market when you clearly know nothing about it?

  • TrackerFF 12 hours ago

    I really do wonder if any of those rock star $100m++ hires managed to get a 9-figure sign-on bonus, or if the majority have year(s) long performance clauses.

    Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.

    • gdbsjjdn 11 hours ago

      I'm sure everyone is doing just fine financially, but I think it's common knowledge that these kind of comp packages are usually a mix of equity and cash earned out over multiple years with bonuses contingent on milestones, etc. The eye-popping top-line number is insane but it's also unlikely to be fully realized.

      • epolanski 7 hours ago

        The point isn't doing fine financially, it's having left multi-million-dollar startups as founders.

        In essence, they have left stellar projects with huge money potential for the corporate rat race, albeit for serious money.

        • MOARDONGZPLZ 4 hours ago

          They’re pretty sophisticated people and weighed the trades. It’s not as if they’re deserving of any sort of sympathy.

        • PKop 22 minutes ago

          If the AI bubble pops such that Meta's exorbitant AI spending is seen as a flop, how well would those startups have done?

        • jstummbillig 4 hours ago

          I think we are stretching the term "corporate rat race" a bit in this case.

      • taytus 7 hours ago

        >I'm sure everyone is doing just fine financially

        They are rich. Nobody is offered $100M+ comp unless they are already top 1% talent.

        • Aurornis 4 hours ago

          $100mm comp packages are more like top 0.0001% to 0.00001% compensation.

    • krona 11 hours ago

      Taking rockstar players 'off the pitch' is the best way second-rate competitors can neutralize their opponents' advantage.

      • Aurornis 4 hours ago

        Nobody is paying $10mm to $100mm comp packages to bench people.

        They want an ROI. Taking them away from competitors is a side bonus.

      • ahi 10 hours ago

        Patrick Boyle on youtube has a good explanation of what's going on in the industry: https://youtu.be/3ef5IPpncsg?feature=shared

        tl;dw: some of it is anti-trust avoidance and some of it is knee-capping competitors.

      • boringg 10 hours ago

        Its a great way to kneecap collective growth and development.

        • chatmasta 10 hours ago

          So wage suppression is good because it’s better for the _collective_?

          • boringg 10 hours ago

            Wage suppression? It's the opposite we're talking about here: pay large amounts of money to make sure people don't work on challenging problems.

            But sure, you can try and argue that's wage suppression.

            • chatmasta 7 hours ago

              The comment I was responding to was implying that it would be better for the collective if Meta was not paying these exorbitant salaries. You said “it [paying high salaries] is a great way to kneecap collective growth and development.”

              In other words, you’re suggesting that _not_ paying high salaries would be good for collective growth and development.

              And if Meta is currently willing to pay these salaries, but didn’t for some reason, that would be the definition of wage suppression.

              • boringg 3 hours ago

                You gotta re-check your position. This is an extreme interpretation of wage suppression.

              • albedoa 6 hours ago

                Oh ya? If I am willing to pay my cleaner $350, but she only charges and accepted an offer of $200, I am engaging in the definition of wage suppression?

            • respondo2134 9 hours ago

              This is the Gavin Belson strategy to starve Pied Piper of distributed computing experts; nobody gets to work on his Signature Edition Box 3!

              • edm0nd 5 hours ago

                Fuck Banksy!

        • meesles 10 hours ago

          This has never been the goal of any business, despite what they say.

      • sgt101 10 hours ago

        We should also say that "being really lucky is the best way to make sure that other people don't have as much luck as you do"

      • ThrowawayTestr an hour ago

        Who would be the first-rate companies in this analogy?

    • KaiserPro 9 hours ago

      It's all in RSUs.

      Supposedly, all people that join Meta are on the same contract. They also supposedly all have the same RSU vesting schedules.

      That means that these "rockstars" will get a big sign-on bonus (but it's payable back if they leave within 12 months), then ~$2m every 3 months in shares.

      • toephu2 2 hours ago

        It's not even in RSUs. No SWEs/researchers are getting $100M+ RSU packages. Zuck said the numbers in the media were not accurate.

        If you still think they are, do you have any proof? any sources? All of these media articles have zero sources and zero proof. They just ran with it because they heard Sam Altman talk about it and it generates clicks.

    • toephu2 2 hours ago

      None of them are getting $100m+ packages. Zuck himself even debunked that myth. But the media loves to run with it because it generates clicks.

      • el_benhameen 2 hours ago

        I have no idea what’s going on behind the scenes, but Zuckerberg saying “nah that’s not true” hardly seems like definitive proof of anything.

    • lukeschlather 8 hours ago

      I have never heard of anyone getting a sign on bonus that was unconditional. When I have had signing bonuses they were owed back prorated if my employment ended for any reason in the first year.

      • Aurornis 4 hours ago

        I was at a startup where someone got an unconditional signing bonus. It wasn't deliberate; they just kept it simple because it was a startup and they thought they could trust the guy, an old friend of the CEO.

        The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.

        From that point forward, signing bonuses had the standard conditions attached.

      • spjt 5 hours ago

        Are most people that money hungry? I wouldn't expect someone like Zuckerberg to understand, but if I ever got to more than a couple million dollars, I'm never doing anything else for the sake of making more money again.

        • MOARDONGZPLZ 4 hours ago

          This is a very weird take. Lots of people want to actively work on things that are interesting to them or impactful to the world. Places like Meta potentially give the opportunity to work on the most impactful and interesting things, potentially in human history.

          Setting that aside, even if the work was boring, I would jump at the chance to earn $100M for several years of white collar, cushy work, purely for the impact I could have on the world with that money.

    • torginus 10 hours ago

      I'm not an academic, but it kinda feels strange to me to stipulate in your contract that you must invent harder

    • nilkn 9 hours ago

      If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.

      • malfist 3 hours ago

        A social media company is more diversified? Maybe compared to Anthropic or OpenAI, but not compared to any of the hyperscalers.

    • indoordin0saur 10 hours ago

      I've heard of high 7-figure salaries but no 9 figure salaries. Source for this?

      • ben_w 10 hours ago

        "Why Andrew Tulloch Turned Down a $1.5 Billion Offer From Mark Zuckerberg" - https://techiegamers.com/andrew-tulloch-rejects-zuckerberg/

        • toephu2 2 hours ago

          "according to people familiar with the matter."

          aka, made up. They can make up anything by saying that. There are numerous false articles published by WSJ about Tesla also. I would take what they say here with a grain of salt. Zuck himself said the numbers in the media were widely exaggerated and he wasn't offering these crazy packages as reported.

        • Melatonic 2 hours ago

          That seems insane, even over 6 years. Maybe he was offered $1.5 billion in funding for the work itself?

    • saubeidl 10 hours ago

      Must feel real good to get a golden ticket out of the bubble collapse when it's this imminent.

      • LevGoldstein 8 hours ago

        Is it imminent? Reading the article, the only thing that's actually changed is that the CEO has stopped hand-picking AI hires and has placed that responsibility on Alexandr Wang instead. The rest is just fluff to turn it into an article. The tech sector being down is happening in concert with the non-tech sector sliding too.

    • Starman_Jones 9 hours ago

      "The New Orleans Saints have signed Taysom Hill to a record $40M contract"

  • renecito 7 hours ago

    Mission accomplished: who'd have thought that disrupting your competition by poaching their talent and erasing value (giving it away for free) would make people realize there is no long-term value in the core technology itself.

    Don't get me wrong, we are moving to commoditization; like any new tech it'll become transparent to our lifestyle and a lot of money will be made as an industry, but it'll be hard to compete on it as a core business competence w/o cheating (and by cheating I mean your FANG company already having a competitive advantage).

    • isoprophlex 5 hours ago

      Whoa, that's actually a brilliant strategy: accelerate the hype first by offering $100M comp packages, then stop hiring and strategically drop a few "yeah, bubble's gonna pop soon" rumours. Great way to fuck with your competition, especially if you're Meta and you're not in the lead yourself.

      • 369548684892826 4 hours ago

        But if Meta believe it's a bubble then why not let the competition continue to waste their money pumping it up? How does popping it early benefit Meta?

  • me551ah 7 hours ago

    Scaling AI will require an exponential increase in compute, and even the current LLMs take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.

    So newer chips will not be exponentially better but will be more of incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that’s cheaper than hiring a human.

    Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.

    The reason the internet, smartphones and computers have seen exponential growth since the 90s is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
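
    Back-of-the-envelope on those two data points (a rough sketch that ignores IPC, SIMD and memory improvements, which add several more orders of magnitude on top):

      old_clock_mhz = 50        # 50MHz 486 from the 90s
      new_clock_mhz = 5000      # modern 5GHz part
      threads = 16              # 8c/16t

      clock_ratio = new_clock_mhz / old_clock_mhz   # 100x from clock speed alone
      naive_ratio = clock_ratio * threads           # ~1600x before IPC/SIMD -- the kind of jump that's unlikely to repeat
      print(clock_ratio, naive_ratio)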

    • aaronblohowiak 6 hours ago

      We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.

      • sarthaksingh99 an hour ago

        We are limited by both compute and available training data.

        If all we wanted was to train bigger and bigger models we have more than enough compute to last us for years.

        Where we lack compute is in scaling AI to consumers. Current models take too much power and specialized hardware to be profitable. If AI improved your productivity by 20-30% but cost you even 10% of your monthly salary, no one would use it. I have used up $10 worth of credits using claude code in an hour, multiple times. Assuming I use it continuously for 8 hours every day in a month, 10 * 8 * 24 = $1920. So it's not that far off the current cost of running the models. If the size of the models scales faster than the speed of the inference hardware, the problem is only going to get worse.

        I too believe that we will eventually discover an algorithm that gives us AGI. The problem is that we cannot will a breakthrough. We can make one more likely by investing more and more into AI but breakthroughs and research in general by their nature are unpredictable.

        I think investing in new individual ideas is very important and gives us lot of good returns. Investing in a field in general hoping to see a breakthrough is a fool's errand in my opinion.

      • mattnewton 6 hours ago

        I think algorithms is a unique limit because it changes how much data or compute you need. For instance, we probably have the algorithms we need to brute force solving more problems today, but they require infeasible compute or data. We can almost certainly train a new 10T parameter mixture of experts that continues to make progress in benchmarks, but it will cost so much to train and be completely undeployable with today’s chips, data, and algorithms.

        So I think the truth is likely we are both compute limited and we need better algorithms.

        • aaronblohowiak 5 hours ago

          There are a few "hints" that suggest to me algorithms will bear a lot more fruit than compute (in terms of flops):

          1) There already exist very efficient algorithms for rigorous problems that LLMs perform terribly at!

          2) Learning is too slow and is largely offline.

          3) "LLMs aren't world models."

      • joegibbs an hour ago

        If the LLM is multimodal would more video and images improve the quality of the textual output? There’s a ton of that and it’s always easy to get more.

      • slashdev 2 hours ago

        I think we might also be limited by energy.

    • azinman2 6 hours ago

      > I highly doubt if we will see the same form of increase in the next 40 years

      People would have predicted this at 1GHZ. I wouldn’t discount anything about the future.

    • threecheese 6 hours ago

      We are a few months into our $bigco AI push and we are already getting token constrained. I believe we really will need massive datacenter rollouts in order to get to the ubiquity everyone says will happen.

    • czk 6 hours ago

      it only requires exponentially MORE money for linear returns!

  • byyoung3 10 hours ago

    Clickbait. Read the article. They just spent several billion hiring a leadership team. They are doing an all-hands to figure out what they need to do.

    • simianwords 8 hours ago

      It's a bit frustrating that most don't read TFA and instead vent their AI angst at the first opportunity they get.

    • spicybbq 7 hours ago

      Since "AI bubble" has become part of the discourse, people are watching for any signs of trouble. Up to this point, we have seen lots of AI hype. Now, more than in the past, we are going to see extra attention paid to "man bites dog" stories about how AI investment is going to collapse.

    • baby 4 hours ago

      The FUD surrounding Meta will never stop.

    • willsmith72 10 hours ago

      yes, because meta has no incentive to act like there's no bubble

      • Capricorn2481 10 hours ago

        So it's not clickbait, even though the headline does not reflect the contents of the article, because you believe the headline is plausible?

        I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.

        • byyoung3 4 hours ago

          "there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble"

          That's what makes it clickbait my friend

  • sirodoht 5 hours ago

    > We are truly only investing more and more into Meta Superintelligence Labs as a company. Any reporting to the contrary of that is clearly mistaken.

    https://x.com/alexandr_wang/status/1958599969151361126?s=46

  • disposition2 2 hours ago

    I can see the value in actual AI. But it seems like in many instances it is being applied to paper over terrible search functionality. Even for the web, it seems like we're using AI to provide more refined search results rather than just fixing search capabilities.

    Maybe it’s just easier to throw ‘AI’ (heavy compute of data) at a search problem, rather than addressing the crux of the problem…people not being provided with the tools to query information. And maybe that’s the answer but it seems like an expensive solution.

    That said, I’m not an expert and could be completely off base.

    • nomel 2 hours ago

      > is more related to terrible search functionality

      If you looked at $ spent per use case, I would think this is probably at the bottom of the list, with the highest use of it being in the free tiers.

  • baxtr 8 hours ago

    Make a mistake once, it’s misjudgment. Repeat it, it’s incompetence?

    Meta nearly doubled its headcount in 2020 and 2021, assuming the pandemic growth would continue. However, Zuckerberg later admitted this was a mistake.

    • FlyingBears 4 hours ago

      So did everyone else ...

      • baxtr 3 hours ago

        Apple didn’t

    • HDThoreaun 2 hours ago

      When you hit on 12 in blackjack and go bust is it a mistake or a gamble? No one can tell the future.

  • dyeje 6 hours ago

    After reading Careless People and watching Meta's metaverse and AI moves, I think Mark comes across as a child chasing the shiny new thing.

    • zmmmmm 2 hours ago

      It's not really a fair characterisation: he persisted for nearly 10 years dumping enormous investment into the VR business, and still does to this day. Furthermore, Meta's AI labs predated all the hype, and the company was investing in and highly respected in the area way before it was "cool".

      If anything, I think the panic at this stage is arising from the sense of having his lunch stolen after having invested so much and for so long.

  • zmmmmm 2 hours ago

    The problem with sentiment driven market phenomena is they lack fundamental support. When they crash, they can really crash hard. And as much as I see real value in the progress in AI, 95% of the investment I see happening is all based on sentiment right now. Actually deploying AI into real operational scenarios to unlock the value everyone is talking about is going to take many years and it will look like a sink hole of cost well before that. Buckle up.

  • gusfoo 4 hours ago

    A trillion dollars of value disappearing in 2 days. We've still got our NFT metaverse shipping waybill project going on somewhere in the org chart, right? Phew!

    • lajetl 3 hours ago

      That's because it was never real to begin with. "Market cap" and "value" are not the same thing. "Value" is "I actually need this and it will dramatically improve my life". "Market cap" is "I can sell this to some idiot".

  • nabla9 12 hours ago

    Quality over quantity.

    Apparently it's better to pay $100 million for 10 people than $1 million for 1000 people.

    • onlyrealcuzzo 12 hours ago

      1000 people can't get a woman to have a child faster than 1 person.

      So it depends on the type of problem you're trying to solve.

      If you're trying to build a bunch of Wendy's locations, it's clearly better to have more construction workers.

      It's less clear that if you're trying to build superintelligence you're better off with 1000 people than 10.

      It might be! But it might not be, too. Who knows for certain until after the fact?

      • lelanthran 12 hours ago

        > 1000 people can't get a woman to have a child faster than 1 person.

        I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.

        Sure, if you want one child. But that's not what business is often doing, now is it?

        The target is never "one child". The target is "10 children", or "100 children" or "1000 children".

        You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.

        IOW, this is a facile comparison not worthy of consideration.[1]

        > So it depends on the type of problem you're trying to solve.

        This[1] is not the type of problem where the analogy applies.

        =====================================

        [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!

        • phkahler 11 hours ago

          >> Sure, if you want one child. But that's not what business is often doing, now is it?

          You're designing one thing. You're building one plant. Yes, you'll make and sell millions of widgets in the end, but the system that produces them? Just one.

          Engineering teams do become less efficient above some size.

          • ethbr1 11 hours ago

            You'd think someone would have written a book on the subject.

            https://en.wikipedia.org/wiki/The_Mythical_Man-Month

          • Windchaser 8 hours ago

            >> You're designing one thing.

            You might well be making 100 AI babies, and seeing which one turns out to be the genius.

            We shouldn’t assume that the best way to do research is just through careful, linear planning and design. Sometimes you need to run a hundred experiments before figuring out which one will work. Smart and well-designed experiments, yes, but brute force + decent theory can often solve problems faster than just good theory alone.

          • Temporary_31337 10 hours ago

            I dare say that size is 3. Fight me ;)

        • earthnail 12 hours ago

          The analogy is a good one. It is used to demonstrate that a larger workforce doesn’t automatically give you better results, and that there is a set of problems, identifiable a priori, where that applies. For some problems, quality is more important than quantity, and you structure your org accordingly. See sports teams, for example.

          In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.

          • lelanthran 11 hours ago

            > In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.

            I am going to repeat the footnote in my comment:

            >> [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!

            IOW, if you're looking specifically for quality, you can't bet everything on one horse.

            • ethbr1 11 hours ago

              You're ignoring that each foundation model requires sinking enormous and finite resources (compute, time, data) into training.

              At some point, even companies like Meta need to make a limited number of bets, and in cases like that it's better to have smarter than more people.

        • amoss 9 hours ago

          Ironically, rather than being facile, the point of the comparison is to explain https://en.wikipedia.org/wiki/Amdahl%27s_law to people who are clearly not familiar with it.
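
          Roughly, for anyone not familiar: if a fraction p of the work can be spread over n workers, the best possible speedup is

            Speedup(n) = 1 / ((1 - p) + p / n)

          so the serial part (the 9 months of gestation, here) caps what any number of extra people can buy you.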

        • coffeebeqn 9 hours ago

          Ah the new strategy - hire one rockstar woman who can gestate 1000 babies per year for $100 mil!

      • fragmede 35 minutes ago

        At the scale we're talking about though, if you need a baby in one month, you need 12,000 women. With that many women, the math says you should have a woman that's already 8 months pregnant, and you'll have a baby in 1 month.

      • skywhopper 10 hours ago

        In re Wendy’s, it depends on whether you have a standard plan for building the Wendy’s and know what skills you need to hire for. If you just hire 10,000 random construction workers and send them out with instructions to “build 100 Wendy’s”, you are not going to succeed.

    • tim333 10 hours ago

      One person who's figured out how to make ASI is more useful than a bunch that haven't. Not sure that actually applies anywhere.

      • coffeebeqn 9 hours ago

        It’s me. I’ve figured it out. Who’s got the offer letter so I can start?

    • butlike 11 hours ago

      I'd rather pay $0 to n people if all they're going to do is make vibe-coded dogshit that spins its wheels and loses context all the time.

    • doctorpangloss 10 hours ago

      The reason they paid $100m for “one person” is because it was someone people liked to work for, which is why this article is a big deal.

  • 42lux 13 hours ago

    What I don't get is that they are gunning for the people that brought us the innovations we are working with right now. How often does it happen that someone really strikes gold a second time in research at such a high level? It's not a sport.

    • gdbsjjdn 11 hours ago

      You're falling victim to the Gambler's Fallacy - it's like saying "the coin just flipped heads, so I choose tails, it's unlikely this coin flips heads twice in a row".

      Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.

    • kkoncevicius 12 hours ago

      Even if they do not strike gold the second time, there is still a multitude of reasons to hire them:

        1. The innovators will know a lot about the details, limitations and potential improvements concerning the thing they invented.
        2. Having a big name in your research team will attract other people to work with you.
        3. I assume the people who discovered something still have a higher chance to discover something big compared to "average" researchers.
        4. That person will not be hired by your competition.
      • cma 12 hours ago

        5. Having a lot of very publicly, extremely highly paid people will make everyone assume anyone working on AI there is highly paid, if not quite to that extreme. What most people who make a lot of money spend it on is wealth signalling, and now employees can get a form of that without the company having to pay them as much.

        • stogot 12 hours ago

          What good is a wealth signal without wealth?

          You’re promoting vacuous vanity

          • andai 11 hours ago

            Higher status, access to higher quality mates, etc.

            • cma 10 hours ago

              It might even play a role in getting you to the US presidency

          • cma 10 hours ago

            > promoting

            Where?

    • this_user 12 hours ago

      Who else would you hire? With a topic as complex as this, it seems most likely that the people who have been working at the bleeding edge for years will be able to continue to innovate. At the very least, they are a much safer bet than some unproven randos.

      • Closi 9 hours ago

        Exactly this - having understood the field well enough to add new knowledge to it has to be a pretty decent signal for a research-level engineer.

        At the research level it’s not just about being smart enough, or being a good programmer, or even completely understanding the field - it’s also about having an intuitive enough feel for the field that you can independently pursue research directions that are novel and yield results. Hard to prove that without having done it before.

    • croes 12 hours ago

      Because the innovations fail to deliver what was promised and the overall costs are higher than the returns

    • MichaelRazum 13 hours ago

      How about Ilya

      • Agraillo 10 hours ago

        Reworded from [1]: Earlier this year Meta tried to acquire Safe Superintelligence. Sutskever rebuffed Meta’s efforts, as well as the company’s attempt to hire him

        [1] https://www.cnbc.com/2025/06/19/meta-tried-to-buy-safe-super...

      • empiko 12 hours ago

        What about him?

        • MichaelRazum 10 hours ago

          AlexNet, AlphaGo, ChatGPT. I would argue he did strike gold a few times.

          • empiko 9 hours ago

            I don't follow him very closely. Was he important for these projects?

        • koakuma-chan 12 hours ago

          Right, what about him? Didn't he start his own company and raise 1 billion a while ago? I haven't heard about them since then.

          • scrollop 11 hours ago

            Didn't he say their goal is AGI and they will not produce any products until then?

            I admire that, in this era where CEOs tend to HYPE!! to increase funding (looking at a particular AI company...)

            • koakuma-chan 9 hours ago

              > Didn't he say their goal is AGI and they will not produce any products until then.

              Did he specify what AGI is? xD

              > I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)

              I think he was probably hyping too, it's just that he appealed to a different audience. IIRC they had a really plain website, which, I think, they thought "hackers" would like.

  • beambot 34 minutes ago

    Moving fast or sheep-like behavior?

  • zeristor 9 hours ago

    Didn't Meta invest big in the Metaverse, then backtrack on that? Was it $20 billion?

    I'd like these investments to pay off; they're bold, but it highlights how deep the pockets have to be to invest so much.

    • rtkwe 8 hours ago

      They didn't just invest, they made it core to their identity with the name change, and it fell so, so flat because the claims were nonsense hype for crypto pumps. We already had stuff like VR Chat (still going pretty strong); it just wasn't corporate and sanitized for sale and mass monetization.

    • bemmu 8 hours ago

      They're still on it though. The new headset prototypes with high FOV sound amazing, and they are iterating on many designs.

      They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.

      • benzible 5 hours ago

        That'll pay for 50% of an AI researcher!

    • coffeebeqn 9 hours ago

      They did indeed - see their current name

    • jazzyjackson 8 hours ago

      They spent billions on GPUs and were well positioned to enter the LLM wars

    • baby 4 hours ago

      When did they backtrack?

    • HDThoreaun 2 hours ago

      I haven't seen any evidence that Meta is backtracking on VR. They've got more than enough money to focus on both; in fact, they probably need to. Gen AI is a critical complement of the metaverse. Without gen AI, metaverse content is too time-consuming to make.

    • eep_social 9 hours ago

      > was it $20 billion

      more like 40, yes

  • ideamotor 10 hours ago

    It sounds like a lot of these big companies are being managed by LLMs and vibes at this point.

    • pas 10 hours ago

      > vibes

      always has been

      (and there's comfort in numbers, no one got fired for buying IBM, etc..)

    • baby 4 hours ago

      Def. managed by vibes, but any company that tells you they're not is basically bullshitting

    • blueboo 10 hours ago

      They’d probably be doing significantly better if they were LLM-guided

  • alex1138 8 hours ago

    Metaverse (especially) or AI might make more sense if you could actually see your friends' posts (and vice versa), if the feed made sense (which it hasn't for years now), and if you could message people you aren't friends with yet without it getting lost in some 'other' folder you won't discover until 3 years from now (Gmail has a Spam folder problem... but the difference is you can see you have messages there and you can at least check it out for yourself)

    What I'm trying to say is: make your product the bare minimum of usable first, maybe? (Also, don't act like, as Jason Calacanis has called him, a marauder, copying everything from everyone all the time. What he's done with Snapchat is absolutely tasteless, and in the case of spying on them - which he's done - very likely criminal)

  • YeGoblynQueenne 12 hours ago

    >> Mr Zuckerberg has said he wants to develop a “personal superintelligence” that acts as a permanent superhuman assistant and lives in smart glasses.

    Yann LeCun has spoken about this, so much that I thought it was his idea.

    In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?

    • butlike 10 hours ago

      I don't want to come across as a shill, but I think superintelligence is being used here because the end result is murky and ill-defined at this point.

      I think the concept is like: "a tool that has the utility of a 'personal assistant', so much so that you wouldn't have to hire one of those." (Not so much that the "superintelligence" will mimic a human personal assistant).

      Obviously this is just a guess though

    • righthand 11 hours ago

      Every time you ask it a question you need to cool it off by pouring a bottle of water on your head.

    • lionkor 10 hours ago

      It just won't happen.

    • techpineapple 11 hours ago

      “ In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?”

      People probably said the same thing about “what if someone doesn’t want to carry a phone with them everywhere”. If it’s useful enough, the culture will change (which, I unequivocally think it won't be, but I digress)

      • butlike 10 hours ago

        "grills" are going to come back in a big way

      • mindslight 10 hours ago

        Very few will not want to wear the glasses.

        https://memory-alpha.fandom.com/wiki/The_Game_(episode)

        Last night I had a technical conversation with ChatGPT that was so full of wild hallucinations at every step, it left me wondering if the main draw of "AI" is better thought of as entertainment. And whether using it for even just rough discovery actually serves as a black hole for the motivation to get things done.

        • dghlsakjg 8 hours ago

          I'm actually a little shocked that AI hasn't been integrated into games more deeply at this point.

          Between Whisper and lightweight tuned models, it wouldn't be super hard to have onboard AI models that you can interact with in much more meaningful ways than we have traditionally interacted with NPCs.

          When I meet an NPC castle guard, it would be awesome if they had an LLM behind it that was instructed to not allow me to pass unless I mention my Norse heritage or whatever.
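
          Something like this sketch, where generate(messages) is just a placeholder for whatever small local model the game would actually ship with; the trick is to have quest logic key off a sentinel token rather than trying to parse free-form dialogue:

            GUARD_PROMPT = (
                "You are a castle guard NPC. Stay in character. Refuse to let the "
                "player pass unless they mention their Norse heritage. If they do, "
                "end your reply with the token [OPEN_GATE]."
            )

            def talk_to_guard(player_line, generate):
                # generate() is a hypothetical call into a local chat model
                messages = [
                    {"role": "system", "content": GUARD_PROMPT},
                    {"role": "user", "content": player_line},
                ]
                reply = generate(messages)
                gate_opens = "[OPEN_GATE]" in reply  # game state flips on the token
                return reply, gate_opens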

          • techpineapple 5 hours ago

            I imagine tuning this to remain fun would be a real challenge.

        • techpineapple 5 hours ago

          “ if the main draw of "AI" is better thought of as entertainment.”

          Crazy but true, though that would somewhat follow most tech advancements, right?

    • saubeidl 10 hours ago

      I think Mr Zuckerberg greatly underestimates how toxic his brand is. No way I want to become a borg for the "they just trust me, dumb fucks" guy.

      • coro_1 9 hours ago

        The META rebrand was pretty brilliantly done. The makeover far outweighs this sort of sentiment for now.

  • caycep 5 hours ago

    Dear lord can Meta hiring be any more unstable? HR dept must be a revolving door at this point

    • awalsh128 5 hours ago

      I got an email recently from a Meta recruiter asking if I'm interested in a non-technical leadership position. I'm a programmer.

  • shubik22 12 hours ago

    This seems to just be a rewrite of https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4. Can we replace the link?

  • seydor 11 hours ago

    That's a man with conviction

    • JKCalhoun 10 hours ago

      Sorry, I was in the metaverse just now. I took my headset off though — could you please repeat that?

      • loloquwowndueo 10 hours ago

        Haven’t been there in a while, did they figure out how to give people, I don’t know, legs and stuff?

      • iphone_elegance 7 hours ago

        the metaverse push is the perfect analogy

        cool, fun concepts/technology fucked by the world's most boring people, whose only desire is to dominate markets and attention... god forbid anything happen slowly/gradually without it being about them

        • baby 4 hours ago

          Fucked? Have you tried the latest Quest3 experience? It would be nowhere near this if it was not for Meta and other big corps.

          Second, did you see the amount of fun content on the store? It's insane. People who are commenting on the Quest have obviously never even opened the app store there.

  • lvl155 10 hours ago

    How did he run out of money so fast? Think Zuck is one of those guys who get sucked into hype cycles and no one around him will tell him so. Even investors.

  • bilsbie 10 hours ago

    I’ve never seen so much evidence for a bubble yet so much potential to be the biggest Thing ever.

    Just getting a lot of mixed signals right now. Not sure what to think.

    • kilroy123 9 hours ago

      Personally, I think it's both! It's a bubble, but it's also going to be something that slowly but steadily transforms the world in the next 10-20 years.

      • zmmmmm 2 hours ago

        People seem very confused thinking that something can't both be valuable AND a bubble.

        Just look at the internet. The dot com bubble was one of the most widely recognised bubbles in history. But would you say the internet was a fad that went away? That there was no value there?

        There's zero contradiction at all in it being both.

      • Insanity 3 hours ago

        We might see another AI winter first, is my assumption. I believe that LLMs are fundamentally the wrong approach to AGI, and that bubble is going to burst before we have a better methodology for AGI.

        Unfortunately, the major players seem focused on the pretense of getting to AGI through LLMs.

    • jug 5 hours ago

      Yeah, it truly IS transformative for industries, no denying anymore at this point. What we have will remain even after a pop. But I think AI was special in how there were massive improvements the more compute you threw at it for years. But then we ran out of training material and suddenly things got much harder. It’s this ramping up of investments to spearhead transformative tech and suddenly someone turns off the tap that makes this so conflicted. I think.

    • crims0n 9 hours ago

      Dot-com was the same way... the Internet did end up having the potential everyone thought it would, businesses just didn't handle the influx of investment well.

    • toephu2 2 hours ago

      I think the internet is bigger than AI.

      Without the internet there is no AI.

    • coffeebeqn 9 hours ago

      There is potential but it does seem like just throwing more money at LLMs is not going to get us to where the bubble expects

    • cuckerberg432 8 hours ago

      ... people said the same thing about the "metaverse" just a few years ago. "You know people are gonna live their entire lives in there! It's gonna change everything!" And 99% of people who heard that laughed and said "what are you smoking?" And I say the same thing when I hear people talk about "the AI machine god!"

  • roxolotl 13 hours ago

    Maybe this time investors will realize how incompetent these leaders are? How do you go from 250mil contracts to freezes in under a month?

    • onlyrealcuzzo 12 hours ago

      I really don't understand this massive flip flopping.

      Do I have this timeline correct?

      * January, announce massive $65B AI spend

      * June, buy Scale AI for ~$15B, massive AI hiring spree, reportedly paying millions per year for low-level AI devs

      * July, announce some of the biggest data centers ever that will cost billions and use all of Ohio's water (hyperbolic)

      * Aug, freeze, it's a bubble!

      Someone please tell me I've got it all wrong.

      This looks like the Metaverse all over again!

      • abxyz 12 hours ago

        The bubble narrative is coming from the outside. More likely is that the /acquisition/ of Scale has led to an abundance of talent that is being underutilised. If you give managers the option to hire, they will. Freezing hiring while reorganising is a sane strategy regardless of how well you are or are not doing.

        • prasadjoglekar 12 hours ago

          This. TFA says this explicitly. Alexandr Wang, the former Scale CEO, is to approve any new hires.

          They're taking stock of internal staff + new acquisitions and how to rationalize before further steps.

          Now, I think AI investments are still a bubble, but that's not why FB is freezing hiring.

          • apwell23 12 hours ago

            > They're taking stock of internal staff + new acquisitions and how to rationalize before further steps.

            Like a toddler collecting random toys in a pile and then deciding what to do with them.

            • prasadjoglekar 12 hours ago

              Perhaps. But more like: there's a new boss who wants to understand the biz before taking any action. I've done this personally, at a much smaller scale of course.

        • ml-anon 11 hours ago

          "abundance of talent" is not something I'd ascribe to Scale.

          • torginus 10 hours ago

            Yeah, Scale was Amazon's Mechanical Turk for the AI era.

        • JKCalhoun 10 hours ago

          Better strategy of course is to quietly freeze hiring. Perhaps that is not an option for a publicly traded company though.

          • dylan604 9 hours ago

            You could just keep holding interviews yet never actually hire anyone, on the basis that the talent pool is wide but shallow. It amounts to the same thing as a freeze, but without the negative connotation for the company, shifting it onto the workforce instead

            • colinsane 5 hours ago

              wow, there's really _zero_ sense of mutual respect in this industry, is there? it's all just "let's make a buck by being total assholes to everyone around us".

            • varjag 8 hours ago

              With an employer the size of Meta people would get the clue fairly quick - with inevitable public backlash.

        • yifanl 10 hours ago

          The bubble narrative has been ongoing for a while, but as I understand it, the extremely disappointing response to GPT-5 has spilled things over.

      • fartfeatures 12 hours ago

        Maybe they are poisoning the well to slow their competitors? Get the funding you need secured for the data centers and the hiring, hire everyone you need and then put out signals that there is another AI winter.

        • Andrex 11 hours ago

          That could eventually screw them over too if they're not careful. It also ascribes a cleverness to Meta's C-suite which I don't think exists.

        • Temporary_31337 9 hours ago

          Occam's razor

      • butlike 10 hours ago

        The scale they operate at makes the billions, bucks.

        As a board member, I'd rather see a billion-dollar bubble test than a trillion-dollar mistake.

        • JKCalhoun 10 hours ago

          True, and after having just leaned heavily into the "metaverse" I expect they're twice shy now.

      • randycupertino 11 hours ago

        Zuckerberg's leadership style feels very reactionary and arrogant: flailing around for the new fad and newly hyped thing, scrapping everything when the current obsession doesn't work out, then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.

        Remember when he pivoted the entire company to the metaverse and it was all about avatars with no legs? And how proudly they trumpeted when the avatars were "now with legs!!" but still looked so pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses, and he was spamming those goofy cringe glasses no one wants in all his Instagram posts - seriously, if you check out his insta he wears them constantly.

        Then this spring/summer it was all about AI: stealing rockstar AI coders from competitors and pouring endless money into flirty chatbots for lonely seniors. Now we have some bad press from that and a realization that it isn't the panacea we thought it was, so we're in the phase where this languishes, and in about 6 months we'll abandon it and roll out a new obsession that will be endlessly hyped.

        Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta. Wish they would just focus on increasing user functionality and enjoyment and trying to resolve the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.

        • butlike 10 hours ago

          > Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta.

          DONT TOUCH THE MONEY-MAKER(S)!!!!

        • Andrex 11 hours ago

          > Zuckerberg's leadership style feels very reactionary and arrogant: flailing around for the new fad and newly hyped thing, scrapping everything when the current obsession doesn't work out, then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.

          Maybe he's like this because the first few times he tried it, it worked.

          Insta threatening the empire? Buy Insta, no one really complains.

          Snapchat threatening Insta? Knock off their feature and put it in Insta. Snap almost died.

          The first couple times Zuckerberg threw elbows he got what he wanted and no one stopped him. That probably influenced his current mindset, maybe he thinks he's God and all tech industry trends revolve around his company.

        • butlike 10 hours ago

          As a DJ, I kinda want the glasses to shoot first-person video :(

      • ozgung 11 hours ago

        By Amara's Law and the Gartner hype cycle, every technological breakthrough looks like a bubble. Investors and technologists should already know that. I don't know why they're acting like it's altcoins in 2021.

        • Andrex 11 hours ago

          1 breakthrough per 99 bubbles would make anyone cautious. The rule should be to assume a bubble is happening by default until proven otherwise by time.

          • butlike 10 hours ago

            That's actually how you create a death spiral for your company. You have to assume 'growth' and not 'death'. 'Life' over 'loss'. 'Flourishing' over 'withering'. That you're strong enough to survive.

            • JKCalhoun 10 hours ago

              Apple has seemingly done well waiting until they see a clear consumer direction (and woefully underserved tech there).

              • butlike 10 hours ago

                That's not playing into a bubble, that's creating a product for a market. You could also argue the Apple Vision is a misplay, or at least premature.

                They've also arrogantly gone against consumer direction time and time again (PowerPC, Lightning Ports, no headphone jack, no replaceable battery, etc.)

                And finally, sometimes their vision simply doesn't shake out (AirPower)

                • JKCalhoun 9 hours ago

                  Oh, yeah — Apple Vision is a complete joke. I'm an Apple apologist to a degree, though, so I can rationalize all their missteps. I won't deny they have had many.

      • boxed 12 hours ago

        The most amusing has to be when Zuckerberg publishes his "thoughts" about how he's betting 100% on AI... written underneath the logo "Meta".

        • stogot 12 hours ago

          Zuck’s metaverse will be populated by AI characters running in the $65b Manhattan-sized data center

          The MAU metric must continue to go up, and no one will know if it’s human or NPC

          • JKCalhoun 10 hours ago

            The people "gooning" over Grok's Ani apparently can't wait to take their girlfriends there. ;-)

    • steve1977 12 hours ago

      IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill. There’s probably a proper term for this.

      • UncleMeat 12 hours ago

        I think that meta is bad for the world and that zuck has made a lot of huge mistakes but calling him a one hit wonder doesn't sit right with me.

        Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.

        The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.

        Zuck hired Sheryl Sandberg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies struggled to convert large user bases into dollars.

        This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.

        • alpha_squared 10 hours ago

          No one else is adding the context of where things were at the time in tech...

          > The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.

          Facebook's API was incredibly open and accessible at the time and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast and onboarding users so easily that it was driving more content to news feeds than built-in tools. Buying Instagram was a defensive move, especially since the API became quite closed-off since then.

          Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought a mobile VPN service used predominantly by younger smartphone users, Onavo(?), and realized the amount of traffic WhatsApp generated by analyzing the logs. Given the insight and growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.

          • UncleMeat 7 hours ago

            I don't think we can really call the Instagram purchase purely defensive. They didn't buy it and then slowly kill it. They bought it and turned it into a product of comparable size to their flagship with sustained large investment.

        • librasteve 12 hours ago

          Whatsapp was also an inspired move

          • alex1138 an hour ago

            Yeah and also much deserving of antitrust

        • MrMember 11 hours ago

          I hate pretty much everything about Facebook but Zuckerberg has been wildly successful as CEO of a publicly traded company. The market clearly has confidence in his leadership ability, he effectively has had sole executive control of Facebook since it started and it's done very well for like 20 years now.

          • Barrin92 9 hours ago

            >has been wildly successful as CEO of a publicly traded company.

            That has a lot to do with the fact that it's a business-centric company. His acumen has been in user growth, monetization of ads, acquisitions and so on. He's very similar to Altman.

            The problems start when you venture into hard technological topics, like the Metaverse fiasco, where you have to have a sober and engineering-oriented understanding of the practical limits of technology, like Carmack, who left Meta pretty frustrated. You can't just bullshit infinitely when the tech and not the sales is what matters.

            Contrast it with Gates, who had a serious programming background; he never promised even a fraction of the cringeworthy stuff you hear from some CEOs nowadays because he would have known it's nonsense. Or take Apple, infinitely more sane on the AI topic because it isn't just a "more users, more growth, stonks go up" company.

        • stogot 12 hours ago

          Buying competitors is not insane or a weird business practice. He was probably advised to do so by the competent people under him

          And what did he do to keep G+ from becoming a valid competitor? It killed itself. I signed up but there was no network effect and it kind of sucked. Google had a way of shutting down all their product attempts too

          • smugma 11 hours ago

            If you read Internal Tech Emails (on X), you’ll see that he was the driving force behind the key acquisitions (successes as well as failures such as Snap).

          • UncleMeat 11 hours ago

            I am also not saying that zuck is a prescient genius who is more capable than other CEOs. I am just saying that it doesn't seem correct to me to say that he is "a textbook case of somebody who got lucky once."

      • arcticbull 12 hours ago

        He's really not. Facebook is an extremely well run organization. There's a lot to dislike about working there, and there's a lot to dislike about what they do, but you cannot deny they have been unbelievably successful at it. He really is good at his job, and part of that has been making bold bets and aggressively cutting unsuccessful bets.

        • onlyrealcuzzo 11 hours ago

          Facebook can be well run without that being due to Zuck.

          There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).

          A basketball team can be great even if their coach sucks.

          You can't attribute everything to the person at the top.

          • smugma 11 hours ago

            That is true, but in Meta’s case it is tightly managed by him. I remember a decade ago a friend was a mid-level manager and would give exec reviews to Zuck, who could absorb information very quickly and redirect feedback to align with his product strategy.

            He is a very hands-on CEO, not one who relies on experts to run things for him.

            In contrast, I’ve heard that Elon has a very good senior management team and they sort of know how to show him shiny things that he can say he’s very hands on about while they focus on what they need to do.

          • Jensson 8 hours ago

            He created the company; if it is well run, it is thanks to him hiring the right people. Regardless of how you slice it, he is a big reason it didn't fail. Most companies like that fail when they scale up and hire a lot of people, but Facebook didn't, and hiring the right people is not luck.

        • rs186 12 hours ago

          hmm... Oculus Quest something something.

          • Esophagus4 12 hours ago

            I can’t tell if you’re being tongue in cheek or not, so I’ll respond as if you mean this.

            It’s easy to cherry pick a few bets that flopped for every mega tech company: Amazon has them, Google has them, remember Windows Phone? etc.

            I see the failures as a feature, not a bug - the guy is one of the only founder CEOs to have ever built a $2T company (trillion with a T). I imagine part of that is being willing to make big bets.

            And it also seems like no individual product failure has endangered their company’s footing at all.

            While I’m not a Meta or Zuck fan myself, using a relatively small product flop as an indication a $2T tech mega corp isn’t well run seems… either myopic or disingenuous.

            • rs186 11 hours ago

              Parent comment says "aggressively cutting unsuccessful bets" and Oculus is nothing like that.

              The Oculus Quest headsets are decent products, but a complete flop compared to their investment and Zuck's vision of the metaverse. Remember they even renamed the company? You could say they're betting on the long run, but I just don't see that happening in 5 or even 10 years.

              As an owner of a Quest 2 and 3, I'd love to be proven wrong though. I just don't see any evidence that this will change any time soon.

              • patapong 11 hours ago

                The VR venture can also be seen as a huge investment in hard tech and competency around issues such as location tracking and display tech for creating AI-integrated smart glasses, which many believe are the next-gen AI interface. Even if the current headsets or form factor do not pay off, I think having this knowledge could be very valuable soon.

              • Esophagus4 11 hours ago

                I don’t think their “flops” of Oculus or Metaverse have endangered their company in any material way, judging by their stock’s performance and the absurd cash generating machine they have.

                Even if they aren’t great products or just wither into nothing, I don’t think we will see an HBS case study in 20 years saying, “Meta could have been a really successful company were it not for their failure in these two product lines”

        • thrown-0825 12 hours ago

          laughs in metaverse

          • arcticbull 12 hours ago

            Absolutely, not everything they do will succeed but that's okay too, right? At this point their core products are used by 1 in 2 humans on earth. They need to get people to have more kids to expand their user base. They're gonna throw shit at the wall and not everything will stick, and they'll ship stuff that's not quite done, but they do have to keep trying; I can't bring myself to call that "failure."

            • thrown-0825 12 hours ago

              their core product IS 1 of 2 humans on earth.

              the product is used by advertisers to sell stuff to those humans.

              • butlike 10 hours ago

                So you're saying the ad revenue sort of allows them to "be their own bank?"

                Then they can bankroll their own new entrepreneurial ideas risk-free, essentially.

                • thrown-0825 9 hours ago

                  They are a publicly traded company, shareholders are their bank.

                  • butlike 7 hours ago

                    What's a buyback?

          • billy99k 12 hours ago

            You laugh, but the Oculus is amazing. I use it as part of my daily workouts.

            • rs186 11 hours ago

              I agree, but that does not make Oculus a commercially successful and viable product. They are still bleeding cash on it, and VR is not going mainstream any time soon.

            • thrown-0825 12 hours ago

              fuck Oculus and Palmer Luckey.

              I have hundreds of hours of building and tinkering on the original Kickstarter kit, and then they sold to FB and shut down all the open source stuff.

      • breitling 12 hours ago

        To be fair, buying Instagram so early + WhatsApp were great moves too.

        • beagle3 12 hours ago

          But they were less “skill” and more “surveillance”. He had very good usage statistics (which he shouldn’t have had) of these apps through Onavo - a popular VPN app Facebook bought for the purpose of spying on what users are doing outside Facebook.

          • disgruntledphd2 12 hours ago

            Instagram was acquired before Onavo.

            • beagle3 9 hours ago

              That’s true, but I remember rumors that Onavo was already supplying analytics to FB at the time of the Instagram purchase.

              Google gave me a paywalled link to FTCWatch that supposedly has the details, but I can’t check.

        • tempusalaria 12 hours ago

          WhatsApp is certainly worth less today than what they paid for it plus the extra funding it has required over time. Let alone producing anything close to ROI. Has lost them more money than the metaverse stuff.

          Insta was a huge hit for sure, but since then Meta's capital allocation has been a disaster, including a lot of badly timed buybacks

        • JKCalhoun 10 hours ago

          Typically when a company is flush with cash, acquisitions become an obvious place to put that money.

      • sumedh 12 hours ago

        Except a company like Google with all its billions could not compete with Facebook. So Mark did something right.

        • spicyusername 12 hours ago

          Or it's really hard to overcome network effects, no matter how good your product is.

          • sumedh 12 hours ago

            You can do a lot of things with billions of dollars and the biggest email service.

      • aleph_minus_one 12 hours ago

        > IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill.

        It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.

        Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether this was because of his skills or because he got lucky), but turning his good idea(s) into a successful business was done by other people (led by Sheryl Sandberg).

        • Esophagus4 11 hours ago

          And isn’t the job of a good CEO to put the right people in the right seats? So if he found a superstar COO that took the company into the stratosphere and made them all gazillionaires…

          Wouldn’t that indicate, at least a little bit, a great management move by Zuck?

        • butlike 10 hours ago

          I mean, there's also a reason the board hasn't ousted him.

        • apwell23 12 hours ago

          It wasn't Sheryl

          • aleph_minus_one 12 hours ago

            Who was it then?

            • butlike 10 hours ago

              It was YOU! As an IC, remember: you're one of the most valuable assets at Meta! Without you, we couldn't build cool products like...

              etc. etc.

      • konart 11 hours ago

        >IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time

        How many people were also at the right place at the right time and got lucky, then went bankrupt or simply never made it this high?

      • MagicMoonlight 11 hours ago

        And he didn't even come up with the idea, he stole it all. And then he stole the work from the people he started it with...

        • alex1138 8 hours ago

          You're probably going to get comments like "Social networking existed before. You can't steal it". Well, on top of derailing someone else's execution of said non-stolen idea (or something), which makes you a jerk, in the case of those he 'stole'/stole from: for starters maybe it was existing code (I don't know if that was ever proven), but maybe it was also the Winklevosses' idea of using .edu email addresses, and possibly other concepts

          Do I think he stole it? Dunno. (Though Aaron Greenspan did log his houseSYSTEM server requests, which seems pretty damning) But given what he's done since (Whatsapp, copying every Snapchat feature)? I'd say the likelihood is non-zero

      • mrweasel 11 hours ago

        It is at least a little suspicious that one week he's hiring like crazy, then next week, right after Sam Altman states that we are in an AI bubble, Zuckerberg turns around and now fears the bubble.

        Maybe he's just gambling that Altman is right, saving his money for now, and will be able to pick up AI researchers and developers at a massive discount next year. Meta doesn't have much of a presence in the space right now, and they have other businesses, so waiting a year or two might not matter.

        • butlike 10 hours ago

          You have to assume they all have each other's phone numbers, right?

      • bitexploder 12 hours ago

        Ehh. You don’t get FB to where it is by being incompetent. Maybe he is not the right leader for today. Maybe. But you have to be right way, way more often than not to create a FB and get it to where it is. To operate from where it started to where it is just isn’t an accident or Dunning-Kruger.

      • thrance 12 hours ago

        The term you're looking for is "billionaire". The amount of serendipity in these guys' lives is truly baffling, and only becomes more apparent the more you dig. It makes sense when you realize their fame is all survivorship bias. After all, there must be someone at the tail end of the bell curve.

    • baq 13 hours ago

      By signing too many 250mil contracts.

      • roxolotl 12 hours ago

        Well that’s the incompetent piece. Setting out to write giant, historic employment contracts without a plan is not something competent people do. And seemingly it’s not that they overextended a bit either, since reports claimed the offers were only open for an extremely limited time; under 30 minutes in some cases.

        • FrustratedMonky 12 hours ago

          Yes.

          Perhaps it was this: Let's hit the market fast, scoop up all the talent we can before anybody can react, then stop.

          I don't think anybody would expect them to 'continue' offering $250 million packages. They would need to stop eventually. They just did it fast, all at once, and now stopped.

      • vasco 12 hours ago

        Or enough of them! >=

    • bluelightning2k 12 hours ago

      Some people actually accepted the contracts before the uno reverse llamabot could activate and block them

    • mlinhares 11 hours ago

      Why are you assuming the investors are competent?

    • blibble 12 hours ago

      don't forget the 115th Rule of Acquisition

      greed IS eternal

    • ludicrousdispla 11 hours ago

      Because you want the ability to low-ball prospective candidates sooner rather than later.

    • zpeti 12 hours ago

      Maybe this time the top posters on HN should stop criticizing one of the top performing founder CEOs of the last 20 years who built an insane business, made many calls that were called stupid at the time (WhatsApp), and many that were actually stupid decisions.

      Like do people here really think making some bad decisions is incompetence?

      If you do, your perfectionism is probably something you need to think about.

      Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company. Oh and please revisit your comment in these timeframes

      • brokencode 12 hours ago

        I think many people just really dislike Zuckerberg as a human being and Meta as a company. Social media has seriously damaged society in many ways.

        It’s not perfectionism, it’s a desire to dunk on what you don’t like whenever the opportunity arises.

        • butlike 10 hours ago

          It's an entertainment tool. Like a television or playstation. Only a fool would think social media is anything more.

          • brokencode 6 hours ago

            Sure, but society is full of fools. Plenty of people say social media is the primary way they get news. Social media platforms are super spreaders of lies and propaganda.

      • rgavuliak 12 hours ago

        I don't think it's about perfect predictions. It's more about going all in on Metaverse and then on AI and backtracking on both. As a manager you need to use your resources wisely, even if they're as big as what Meta has at its disposal.

        The other thing - the Peter principle says people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go; maybe no one is really ready to operate at that level? It seems both he and Elon have made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?

      • piva00 12 hours ago

        > Like do people here really think making some bad decisions is incompetence?

        > If you do, your perfectionism is probably something you need to think about.

        > Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.

        It's the effect of believing (and being sold) meritocracy: if you are making literal billions of dollars for your work, then some will think it should be spotless.

        Not saying I think that way, but it's probably what a lot of people consider. Being paid that much signals that your work should be absolutely exceptional; big failures just show they are also normal, flawed people, so perhaps they shouldn't be worth a million times more than other normal, flawed people.

        • zpeti 12 hours ago

          He’s not “being paid that much”

          He’s earned almost all his money through owning part of a company that millions of shareholders think is worth trillions, and does in fact generate a lot of profits.

          A committee didn’t decide Zuckerberg is paid $30bn.

          And I'd say his work is pretty exceptional. If it wasn't, his company wouldn't be growing. And he'd probably be pressured into resigning as CEO

    • dist-epoch 12 hours ago

      > How do you go from 250mil contracts to freezes in under a month?

      Easy, you finished building up a team. You can only have so many cooks.

      • rco8786 12 hours ago

        That's not really how that works in the corporate/big tech world. It's not as though Meta set out and said "Ok we're going to hire exactly 150 AI engineers and that will be our team and then we'll immediately freeze our recruiting efforts".

      • apwell23 12 hours ago

        how tf do you know when you are "finished"

    • iLoveOncall 11 hours ago

      I'm still waiting for a single piece of proof that any contract in the hundreds of millions was actually signed.

      • JKCalhoun 10 hours ago

        The damage is already done though. If I worked for Meta and did not get millions, I think I would be pretty irate.

        • iLoveOncall 9 hours ago

          Which is definitely why Sam Altman started this rumor.

    • Lalabadie 12 hours ago

      This is Meta, named after the fact that the Metaverse is undoubtedly what comes next.

    • BobbyTables2 12 hours ago

      So did he also read the recent article on Sam Altman saying it was a bubble?

    • andrepd 12 hours ago

      Yes, people who struck it rich are not miraculously more intelligent or capable. Seems obvious, but many people believe they are.

    • deadbabe 12 hours ago

      Could they get their money back?

  • boshalfoshal 9 hours ago

    Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.

    A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.

    From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "its all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.

    • baby 4 hours ago

      This ^ most of this thread is missing the point

  • duxup 8 hours ago

    Previous articles and comments were "Praise Mark for being brave enough to go all in on AI!"

    Now we have this ;)

  • mathverse 6 hours ago

    To me AI is like the phone business. A few companies (Apple, Samsung) will manage to score a home run and the rest will be destined to offer commoditized products.

    • baby 4 hours ago

      And Meta doesn't want to miss the phone train this time

  • rognjen 5 hours ago

    On the other hand, it's been shown time and again that we should do the opposite of whatever Zuck says.

  • ojr 12 hours ago

    I just did a phone screen with Meta, and the interviewer asked for Euclidean distance between two points; they definitely have some nerds in the building.

    • karmakurtisaani 12 hours ago

      That's like 8th grade math, what am I misunderstanding about your comment?

      E: wasn't the only one.

      • ojr 12 hours ago

        K closest points using Euclidean distance and a heap is not 8th grade math, although any 8th grade math problem can be transformed into a difficult "adult" question. Sums are elementary; asking to find a window of prefix sums that add up to something is still addition, but a little more tricky
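
        For the prefix-sum example, the usual trick is a hashmap of prefix sums seen so far; a quick sketch of the classic "count subarrays summing to k" version:

          def count_subarrays_with_sum(nums, target):
              # a window sums to target exactly when two prefix sums differ by target
              seen = {0: 1}
              prefix = count = 0
              for x in nums:
                  prefix += x
                  count += seen.get(prefix - target, 0)
                  seen[prefix] = seen.get(prefix, 0) + 1
              return count

        So it is still "just addition", with a dict doing the bookkeeping.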

    • jaccola 12 hours ago

      People saying it is a high school maths problem! I'd like to see you provide a general method for accurately measuring the distance between two arbitrary points in space...

      • ojr 12 hours ago

        Using a heap, in 10 minutes; the Euclidean distance formula was given and had to be used in the answer. Maybe they thought that was the question?

      • butlike 10 hours ago

        Compare to the speed of light, c at your two reference frames

    • rogerkirkness 12 hours ago

      They actually need to know so that they can train Llama.

    • A_D_E_P_T 12 hours ago

      I suppose the trick is to have an iPad running GPT voice mode off to the side, next to your monitor. Instruct it to answer every question it overhears. This way you'll ace all of the "humiliation ritual" questions.

      • ojr 12 hours ago

        There's a YouTube channel made by a Meta engineer; he said to memorize the top 75 LeetCode Meta questions and their approaches. He doesn't say fluff like "recognize patterns". My interviewer was a 3.88/4 GPA master's comp sci guy from Penn; I asked for feedback and he said to always be studying, it's useful if you want a career...

    • ojr 12 hours ago

      It wasn't just Euclidean distance of course, it was this LeetCode problem, k closest points to origin: https://leetcode.com/problems/k-closest-points-to-origin/des... I thought if I needed a heap I would have to implement it myself; I didn't know I could use a library
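
      For anyone curious, a minimal Python sketch of the library route (heapq.nsmallest keeps a bounded heap internally, and squaring the coordinates preserves the ordering, so no sqrt is needed):

        import heapq

        def k_closest(points, k):
            # O(n log k): nsmallest maintains a heap of at most k candidates
            return heapq.nsmallest(k, points, key=lambda p: p[0] ** 2 + p[1] ** 2)

        # k_closest([[1, 3], [-2, 2], [5, 8]], 2) -> [[-2, 2], [1, 3]]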

      • nijave 12 hours ago

        I.e. the nearest neighbor problem. Presumably seeing if the candidate gave a naive solution and was able to optimize or find a more ideal solution

        • ojr 12 hours ago

          It's not a nearest neighbor problem, that is incorrect; they expect candidates to have the heap solution on the first go. You have 10-15 minutes to answer, no time to optimize. Cheaters get blacklisted; welcome to the new reality

          • esafak 12 hours ago

            Finding the k points closest to the origin (or any other point) is obviously the k-nearest neighbors problem. What algorithm and data structure you use does not change that.

            https://en.wikipedia.org/wiki/Nearest_neighbor_search

            edit: If you want to use a heap, the general solution is to define an appropriate cost function; e.g., the p-norm distance to a reference point. Use a union type with the distance (for the heap's comparisons) and the point itself.
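
            In Python that usually reduces to exactly that: a (cost, point) tuple on a heap, with the cost function swapped in as needed. A rough sketch of the generalized version (skipping the final p-th root, since it preserves the ordering):

              import heapq

              def k_closest_to(points, k, ref=(0, 0), p=2):
                  # the heap orders tuples by their first element, i.e. the cost
                  heap = [(sum(abs(a - b) ** p for a, b in zip(pt, ref)), pt) for pt in points]
                  heapq.heapify(heap)  # O(n)
                  return [heapq.heappop(heap)[1] for _ in range(min(k, len(heap)))]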

            • ojr 11 hours ago

              True, I was thinking "node and neighbors." This is a heap problem, and it actually does matter which algorithm you use; I learned that the hard way today. Using a heap library (I didn't know you could do that) is much easier than trying to implement quickselect. Don't make the same mistake!
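
              For comparison, a rough quickselect sketch (assuming Python; average O(n), but noticeably more code to get right under time pressure than a single heap-library call):

                  import random

                  def k_closest_quickselect(points, k):
                      # Partially reorders `points` in place so that the first k entries
                      # are the k closest to the origin (in no particular order).
                      dist = lambda pt: pt[0] ** 2 + pt[1] ** 2

                      def partition(lo, hi):
                          # Move a random pivot to the end, then group smaller distances first.
                          pivot_idx = random.randint(lo, hi)
                          points[pivot_idx], points[hi] = points[hi], points[pivot_idx]
                          pivot = dist(points[hi])
                          store = lo
                          for i in range(lo, hi):
                              if dist(points[i]) < pivot:
                                  points[store], points[i] = points[i], points[store]
                                  store += 1
                          points[store], points[hi] = points[hi], points[store]
                          return store

                      lo, hi = 0, len(points) - 1
                      while lo < hi:
                          pos = partition(lo, hi)
                          if pos == k:
                              break
                          lo, hi = (pos + 1, hi) if pos < k else (lo, pos - 1)
                      return points[:k]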

          • butlike 10 hours ago

            Wish it were more about how you think than requiring a boolean correct/incorrect answer on the whiteboard after 15 minutes.

    • dist-epoch 12 hours ago

      That's a basic high school math problem.

      • ojr 10 hours ago

        The foundation, like that of every LeetCode problem, is basic high school math. When the foundation of the problem is trigonometry, it's way harder than stacks, arrays, linked lists, BFS, DFS...

    • andrepd 12 hours ago

      I don't get the joke.

      • ojr 10 hours ago

        no joke, stop pretending like you know the answer to every LeetCode question that utilizes Euclidean distance

  • boringg 9 hours ago

    Nvidia earnings next week. That's the bellwether; everything else is speculation.

  • noobermin 7 hours ago

    And just three weeks ago, when this very Meta announced $250 million salary packages, I was suggesting a crash might hurt badly.

  • adithyassekhar 12 hours ago

    Does this mean the ai companies will start charging more? I only just started figuring this AI thing out.

    • lm28469 12 hours ago

      They're all bleeding money so yes it's inevitable.

      It's always the same thing: Uber, food delivery, e-scooters, &c. They bait you with cheap trials and stay cheap until the investors' money runs out, and once you're reliant on them they jack up the prices as high as they can.

    • frexs 12 hours ago

      May the enshittification begin.

    • neuronic 12 hours ago

      Someone needs to finance absurd operational costs if the services are supposed to stick around.

  • noisy_boy 10 hours ago

    Maybe they are trying to signal to the AI talent in general to temper their expectations while simultaneously chasing rockstars with enormous sums.

  • real_marcfawzi 4 hours ago

    All news is being manipulated to short stocks and make a ton of money doing so, based on perceived bad news.

  • scrollop 11 hours ago

    Makes you wonder whether Llama progress is not going well and/or we're entering a plateau in LLM architecture development.

    • butlike 10 hours ago

      The article got me thinking that there's some sort of bottleneck that makes scaling astronomically expensive, or that the value just isn't really there.

      1. Buy up top talent from other's working in this space

      2. See what they produce over, say, 6 months to a year

      3. Hire a corpus of regular ICs to see what _they_ produce

      4. Open source the model to see if any programmer at all can produce something novel with a pretty robust model.

      Observe that nothing amazing has really come out (besides a pattern-recognizing machine that placates the user to coerce them into using more tokens for more prompts), and potentially call it quits on hiring for a bubble.

      • aleph_minus_one 10 hours ago

        > Observe that nothing amazing has really come out

        I wouldn't say so. The problem is rather that some actually successful applications of such AI models are not what companies like Meta want to be associated with. Think in directions like AI boyfriends/girlfriends (a very active scene, and a common use of locally hosted LLMs), or roleplaying (in a very broad sense). For such applications, it matters a lot less if the LLM produces strange results in some boundary cases.

        If you want to get an impression of such scenes, google "character.ai" (roleplaying), or for AI boyfriend/girlfriend have a look at https://old.reddit.com/r/MyBoyfriendIsAI/

  • rudedogg 10 hours ago

    Metaverse people: “We’re so back!”

  • nijave 12 hours ago

    Title is a bit misleading. Meta freezes hiring after acquiring and hiring a ton while, somewhere else, Altman says it's a bubble.

    The more obvious reason for a freeze is that they just got done acquiring a ton of talent.

  • vonneumannstan 9 hours ago

    Did the board realize Zuck was out of his mind or what?

    • dghlsakjg 8 hours ago

      Does it matter?

      Zuckerberg holds 90% of the class B supershares. There isn't much the board can do when the CEO holds most of the shareholder votes.

  • SoftTalker 12 hours ago

    Nothing would give me a nicer feeling of schadenfreude than to see Meta, Google, and these other frothing-at-the-mouth AI hucksters take a bath on their bets.

    • Hammershaft 11 hours ago

      Can we try to not turn HN into this? I come to this forum to find domain experts with interesting commentary, instead of emotionally charged low effort food fights.

      • dwnw 9 hours ago

        your comment somehow feels more emotionally charged and low effort than the original. here, let's continue that...

    • garbawarb 8 hours ago

      Why would that give you a nice feeling?

    • aleph_minus_one 12 hours ago

      Until just a few months ago, people on HN were shouting down anyone who argued that spending big money on building AI might not be a good idea ...

    • Lionga 12 hours ago

      How did this get pushed off the front page with over 100 points in less than an hour? YC does not like that kind of article?

      • cyberlimerence 11 hours ago

        #comments >>> #points --> flame war detector

  • siliconc0w 9 hours ago

    It never really made sense for Meta to get into AI; the motivations were always pretty thin, and it seemed like they just wanted to ride the wave.

    • dylan604 9 hours ago

      Isn't that what companies are supposed to do by seeing/following/setting trends in a way that increases revenue and profit for the shareholders?

    • Insanity 4 hours ago

      I somewhat disagree here. Meta is a huge company with multiple products. Experimenting with AI and trying to capitalize on what's bound to be a larger user market is a valid angle for the company to take.

      It might not pan out, but it's worth trying from a pure business point of view.

    • dghlsakjg 8 hours ago

      Meta's business model is to capture attention - largely with "content" - so they can charge lots of money to sprinkle ads amongst that content.

      I can see a lot of utility for Meta to get deeply involved in the unlimited custom content generating machine. They have a lot of data about what sort of content gets people to spend more time with them. They now have the ability to endlessly create exactly what it is that keeps you most engaged and looking at ads.

      Frankly, content businesses that get their revenue from ads are one of the most easily monetizable ways to use the outputs of AI.

      Yes, it will pollute the internet to the point of making almost all information untrustable, but think of how much money can be extracted along the way!

      • siliconc0w 7 hours ago

        The whole point is novelty/authenticity/scarcity though. If you just have a machine that generates infinite, infinitely cute cat videos, then people will cease to be interested in cat videos. And it's not like they pay content creators anyway.

        It's Spain sinking their own economy by importing tons of silver.

        • dghlsakjg 2 hours ago

          We have services that will serve you effectively infinite cat videos, and neither cat videos nor the websites that serve them have ceased to be popular.

          It is actually the basis for the sites that people tend to spend most of their time and attention on.

          Facebook, Instagram, Reddit, TikTok all live on the users that only want to see infinite cat videos (substitute cat video for your favorite niche). Already much of the content is AI generated, and boy does it do numbers.

          I am not convinced that novelty, authenticity, or scarcity matter in the business model. If they do, AI has solved novelty, has enough people fooled with authenticity, and scarcity... no one wants their cat video feed to stop.

  • lilerjee 11 hours ago

    This matches my view of AI: there is a huge bubble in current AI. Current AI is nothing more than a second-hand information-processing model, with inherent cognitive biases, a lag behind environmental changes, and other limitations and shortcomings.

  • Waterluvian 6 hours ago

    Is this the stage of the bubble where they burst the bubble by worrying that there’s a bubble?

  • StarterPro 4 hours ago

    It's almost like nobody asked for the dramatic push of AI, and it was all created by billionaires trying to become even richer at the cost of people's health and the environment.

    • snickerbockers 16 minutes ago

      I still have yet to see it do anything useful. I've seen several very impressive "parlor tricks" which a decade ago I thought were impossible (image generation, text parsing, arguably passing the Turing test), but I still haven't seen anybody use AI in a way that solves a real problem which doesn't already have an existing solution.

      I will say that Grok is a very useful research assistant for situations where you understand what you're looking at but are at an impasse because you don't know its name and therefore can't look it up. But then it's just an incremental improvement over search engines rather than a revolutionary new technology.

  • seatac76 12 hours ago

    The money committed to payroll for these supposed top AI hires is equivalent to a mid-size startup's entire payroll; no wonder they had to hit pause.

  • hinkley 3 hours ago

    How to make a bubble pop: announce a trillion dollar company has stopped hiring in that area.

  • freeopinion 3 hours ago

    If AI really is a bubble and somehow implodes spectacularly over the rest of this year, universities will continue to spit out AI specialists for years to come, and Mr. Z. will keep hiring them into every opening that comes up whether he wants to or not.

    • snickerbockers 24 minutes ago

      Silicon Valley has never seen a true bubble burst; even the legendary dot-com bubble was a minor setback from which the industry had fully recovered in about 5-10 years.

      I have been saying for at least 15 years now that eventually Silly Valley will collapse when all these VCs stop funding dumb startups by the hundreds in search of the elusive "unicorns", but I've been wrong at every turn as it seems that no matter how much money they waste on dumb bullshit the so-called unicorns actually do generate enough revenue to make funding dumb startup ideas a profitable business model....

  • nova22033 12 hours ago

    > The team drew criticism from executives this spring after the release of the latest Llama models underperformed expectations.

    interesting

  • wnc3141 7 hours ago

    In a few months: "Sorry, my whims proved wrong again, so we'll take the healthcare and stability away from, I guess, 10% of you."

  • saubeidl 10 hours ago

    LLMs are not the way to AGI and it's becoming clearer to even the most fanatic evangelists. It's not without reason GPT-5 was only a minor incremental update. I am convinced we have reached peak LLM.

    There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator - but even then, that translation layer will be inherently unreliable due to the nature of LLMs.

    • butlike 10 hours ago

      Yup. I agree with you.

      We're finally reaching the point where it's cost-prohibitive to sweep this fact under the rug with scaling out data centers and refreshing version numbers to clear contexts.

  • Pavilion2095 9 hours ago

    > Mark Zuckerberg has blocked recruitment of artificial intelligence staff at Meta, slamming the brakes on a multibillion-dollar hiring spree amid fears of an AI bubble.

    > amid fears of an AI bubble

    Who told the Telegraph that these two things are related? Is it just another case of wishful thinking?

  • lawlessone 7 hours ago

    I feel like the giant $100 million / $1 billion salaries could have been better spent hiring a ton of math, computer science, and data science graduates and forming an AI skunkworks out of them.

    Also throw in a ton of graduates from other fields - sciences, arts, psychology, biology, law, finance, or whatever else you can imagine - to help create data and red-team their fields.

    Hire people with creative writing and musical skills to give it more samples of creative writing, songwriting, summarization, etc.

    And people who are good at teaching and breaking complex problems into easier-to-understand chunks for different age brackets.

    Their user base is big, but it's not the same as ChatGPT's; they won't get the same tasks to learn from that ChatGPT gets from its users.

  • akudha 11 hours ago

    Must be nice to do whatever he wants, without worrying about consequences…

  • nashashmi 10 hours ago

    Mark created the bubble. Other investors saw few opportunities for investment, so they put more money into a few companies.

    What we need is more independent and driven innovation.

    Right now the greatest obstacle to independent innovation is the massive data bank the bigger companies have.

  • qwertox 6 hours ago

    "Now let's make the others doubt that this is a meaningful investment"

    After phase 1, "the shopping spree".

  • ekianjo 12 hours ago

    > Sam Altman, OpenAI’s chief executive, has compared hype around AI to the dotcom bubble at the turn of the century

    Sam is the main one driving the hype, that's rich...

    • motorest 12 hours ago

      > Sam is the main one driving the hype, that's rich...

      It's also funny that he's been branding those who accept better job offers as mercenaries. It does sound like the statements try to modulate competition both in the AI race and in acquiring the talent driving it.

    • cedws 12 hours ago

      Now that you mention it, there's been a very sudden tone shift from both Altman and Zuckerberg. What's going on?

      • logicchains 11 hours ago

        GPT-5 was a massive disappointment to people expecting LLMs to accelerate to the singularity. Unless Google comes out with something amazing in the next Gemini, all the people betting on AI firms owning the singularity will be rethinking their bets.

    • dist-epoch 12 hours ago

      Or now that he has the money he wants people to stop investing in the competition.

    • bux93 12 hours ago

      But then, he's purposely comparing it to the .com bubble - that bubble had some underlying merit. He could compare it to NFTs, the metaverse, the South Sea Company. It wouldn't make sense for him to say it's not a bubble when it's patently clear, so he picks his bubble.

    • nijave 12 hours ago

      Facebook, Twitter, and some others made it out of the social media bubble. Some "gig" apps survived the gig bubble. Some crypto apps survived peak crypto hype

      Not everyone has to lose, which is presumably what he's banking on.

      • _heimdall 12 hours ago

        Right, he's hoping to be Amazon rather than Pets.com in the dot-com bubble analogy.

  • animitronix 9 hours ago

    Why is he always behind the curve, always?

    • SkyMarshal 3 hours ago

      To be fair it worked for him the first time. And for Apple too multiple times for that matter.

    • shortrounddev2 9 hours ago

      He and his company have not innovated since they created Facebook. They bought their success after that.

  • EternalFury 11 hours ago

    Is it just me or does it feel like billionaires of that ilk can never go broke, no matter how bad their decisions are? The complete shift to the metaverse, the complete shift to LLMs and fat AI glasses, the bullheaded "let's suck all the talent out of the atmosphere" phase, and now let's freeze all hiring. All in a handful of years.

    And yet, billionaires will remain billionaires. As if there are no consequences for these guys.

    Meanwhile, I feel another bubble burst coming that will leave everyone else high and dry.

    • pas 10 hours ago

      The top 100 richest people on the globe can do a lot more stupid stuff and still walk away to a comfortable retirement, whereas the bottom 10-20-.. percent don't have that luxury.

      not to mention that these rich guys are playing with the money of even richer companies with waaay too much "free cash flow"

  • morkalork 11 hours ago

    Really feels like it went from "AI is going to destroy everyone's jobs forever" to "whoops bubble" in about 6 weeks.

    • deltarholamda 11 hours ago

      It'll be somewhere in between. A lot of capital will be burned, quite a few marginal jobs will be replaced, and AI will run into the wall of not enough new/good training material because all the future creators will be spoiled by using AI.

    • moduspol 10 hours ago

      Even that came after "AI is going to make itself smarter so fast that it's inevitably going to kill us all and must be regulated" talk ended. Remember when that was the big issue?

      Haven't heard about that in a while.

      • morkalork 10 hours ago

        I've seen a few people convince themselves they were building AGI by trying to do that, though it looked more like the psychotic ramblings of someone entering a manic episode, committed to GitHub. And so far none of their pet projects has taken over the world.

        It actually kind of reminds me of all those people who snap thinking they've solved P=NP and start spamming their "proofs" everywhere.

    • rchaud 10 hours ago

      Makes sense. Previously the hype was so all-encompassing that CEOs could simply rely on an implicit public perception that it was coming for our jerbs. Once they have to start explicitly saying that line themselves, it's because that perception is fading.

    • iab 11 hours ago

      No worries, we’ll be back at the takeover stage in another 6 weeks

    • fnordlord 11 hours ago

      I feel like it was never a bubble to begin with. It was a hoax.

  • srameshc 12 hours ago

    It could be that, beyond the AI bubble, Meta has a broader read on economic conditions. Corporate spending cuts often follow such insights.

  • trumbitta2 10 hours ago

    "Mark Zuckerberg freezes AI hiring after he personally offered 250M to a single person and the budget is now gone."

  • charliebwrites 10 hours ago

    “Man who creates bubble now fears it”

  • smeeger 3 hours ago

    If there is a path to AGI, then the ROI is going to be enormous, literally regardless of how much was invested. Hopefully this is another bubble; I would really rather not have my life's work vaporized by the singularity.

  • tamimio 3 hours ago

    I think I have said it before here (and in real life too) that AI is just another bubble, let alone AGI, which is a complete joke, and all I got was angry faces and responses. Tech has always had bubbles: early adopters get the biggest slice and then try as much as possible to keep the bubble alive to maximize that cut. By the time the average person is aware of it and talking about it, it's already over. Previous tech bubbles: the internet, search engines, content makers, smartphones, cybersecurity, blockchain and crypto, and now generative AI. By the way, AI was never new, and anyone in the field knows this; ML was already part of some tech before generative AI kicked in.

    Glad I personally never jumped on the hype and still focused on what I think is the big thing, but until I get enough funds to be the first in the market, I will keep it low.

  • xyst 9 hours ago

    Explains why AI companies like Windsurf were hunting for buyers to hold the bag

  • antithesizer 5 hours ago

    itt: confirmation bias

  • mgaunard 12 hours ago

    As an outsider, what I find most impressive is how long it took for people to realize this is a bubble.

    It has been one for a few years now.

    • marcyb5st 12 hours ago

      Note: I was too young to fully understand the dot com bubble, but I still remember a few things.

      The difference I see is that, in contrast to websites like pets.com, AI gave the masses something tangible and transformative, with the promise it could get even better. Along with these promises, CEOs also hinted at a transformative impact "comparable to electricity or the internet itself".

      Given the pace of innovation in the last few years I guess a lot of people became firm believers and once you have zealots it takes time for them to change their mind. And these people surely influence the public into thinking that we are not, in fact, in a bubble.

      Additionally, the companies that went bust in the early 2000s never had such lofty goals/promises to match their lofty market valuations, and compared with that, today's high market valuations/investments are somewhat flying under the radar.

      • _heimdall 12 hours ago

        > The difference I see is that, conversely to websites like pets.com, AI gave the masses something tangible and transformative with the promise it could get even better.

        The promise is being offered, that's for sure. But the product will never get there; LLMs by design will simply never be intelligent.

        They seem to have been banking on the assumption that human intelligence truly is nothing more than predicting the next word based on what was just said/thought. That assumption sounds wrong on the face of it and they seem to be proving it wrong with LLMs.

        • marcyb5st 10 hours ago

          I agree with you fully.

          However, even friends/colleagues who, like me, are in the AI field (I am more on the "ML" side of things) always mention that while it is true that predicting the next token is a poor approximation of intelligence, emergent behaviors can't be discounted. I don't know enough to have an opinion on that, but for sure it keeps people/companies buying GPUs.

          • _heimdall 5 hours ago

            > but for sure it keeps people/companies buying GPUs.

            That's a tricky metric to use as an indicator, though. Companies, and more importantly their investors, are pouring mountains of cash into the industry based on the hope of what AI may be in the future rather than what it is today. There are multiple incentives that could drive the market for GPUs, and only a portion of them have to do with today's LLM outputs.

      • iphone_elegance 10 hours ago

        You really can't compare "AI" to a single website; it makes no sense.

        • marcyb5st 9 hours ago

          It was an example. Pets.com was just the flagship (at least in my mind), but during the dot-com bubble there were many, many more such sites with inflated market values. I mean, if it were just one site that crashed, it wouldn't be called a bubble.

    • jimlawruk 12 hours ago

      From the Big Short: Lawrence Fields: "Actually, no one can see a bubble. That's what makes it a bubble." Michael Burry: "That's dumb, Lawrence. There are always markers."

      • criley2 12 hours ago

        Ah Michael Burry, the man who has predicted 18 of our last 2 bubbles. Classic broken clock being right, and in a way, perfectly validates the "no one can see a bubble" claim!

        If Burry could actually see a bubble/crash, he wouldn't be wrong about them 95%+ of the time... (He actually missed the covid crash as well, which is pretty shocking considering his reputation and claims!)

        Ultimately, hindsight is 20/20, and knowing whether "the markers" will lead to a major economic event is impossible, just like timing the market and picking stocks. At scale, it's impossible.

        • jaccola 12 hours ago

          I feel 18 out of 2 isn't a good enough statistic to say he is "just right twice a day".

          What was the cost of the 16 missed predictions? Presumably he is up overall!

          It also doesn't tell us his false positive rate. If, just for example, there were 1 million opportunities for him to call a bubble, and he called 18 when there were only 2, that makes him look much better at predicting bubbles.

          • criley2 12 hours ago

            If you think that predicting an economic crash every single year since 2012 and being wrong (except for 2020, when he did not predict a crash and there was one) is good data, by all means, continue to trust the Boy Who Cried Crash.

        • jimlawruk 9 hours ago

          This sets up the other quote from the movie: Michael Burry: “I may be early but I’m not wrong”. Investor guy: “It’s the same thing! It's the same thing, Mike!”

    • menaerus 12 hours ago

      Confirmation bias much?

  • rvz 13 hours ago

    One of the signals that we are near the top of this AI bubble.

    In this hype cycle, you are in late 1999, early 2000.

  • code_for_monkey 11 hours ago

    what happened to the metaverse??? I thought we finally had legs!

    Seriously, why does anyone take this company seriously? It's gotta be the worst of big tech, besides maybe anything Elon touches, and even then...

    • ethbr1 11 hours ago

      1. They've developed and open sourced incredibly useful and cool tech

      2. They have some really smart people working there

      3. They're well run from a business/financial perspective, especially considering their lack of a hardware platform

      4. They've survived multiple paradigm shifts, and generally picked the right bets

      Among other things.

      • jopsen 4 hours ago

        0. Many people use Facebook Messenger as their primary contact book.

        Even my parents are on Facebook messenger.

        Convincing people to use Signal is not easy, and there are lots of people I talk to whose phone numbers I don't have.

      • code_for_monkey 10 hours ago

        all of those are basically true for every big tech company

        • ethbr1 10 hours ago

          So we agree that Meta is at minimum the equal of every big tech company?

          • code_for_monkey 9 hours ago

            we agree that at maximum its the equal of every tech company

            • ethbr1 9 hours ago

              In what other metrics do other big tech companies exceed it, causing them to be ranked higher?

  • ninetyninenine 9 hours ago

    I don’t think it’s entirely a bubble. Definitely this is revolutionary technology on the scale of going to the moon. It will fundamentally change humanity.

    But while the technology is revolutionary the ideas and capability behind building these things aren’t that complicated.

    Paying a guy millions doesn't mean shit. So what Mark Zuckerberg was doing was dumb.

    • lm28469 9 hours ago

      > on the scale of going to the moon

      Of all the examples of things that actually had an impact, I would pick this one last... the steam engine, the internet, personal computers, radios, GPS, &c. But going to the moon? The thing we did a few times and stopped doing once we won the USSR vs USA dick contest?

      • ninetyninenine 8 hours ago

        Impact is irrelevant; we aren't sure about the impact of AI yet. But the technology is revolutionary. That's why, for the example, I picked something that's revolutionary but whose impact is not as clear.

  • smrtinsert 9 hours ago

    > Pays 100 million for people, wonders if there's a bubble

  • neuronic 12 hours ago

    Good call in this case specifically, but lord, this is some kind of directionless leadership despite well-thought-out concerns over the true economic impact of LLMs and other generative AI tech.

    Useful, amazing tech, but only for specific niches and not as a generalist application that will end and transform the world as we know it.

    I find it refreshing to browse r/betteroffline these days after 2 years of being bombarded with grifting LinkedIn lunatics everywhere you look.

  • TZubiri 9 hours ago

    The most likely explanation I can think of are drugs.

    Offering $1B salaries and then backtracking - it's like when that addict friend calls you with a super cool idea at 11pm and then regrets it 5 days later.

    Also, rejecting a $1B salary? Drugs; it isn't unheard of in Silicon Valley.

  • battxbox 12 hours ago

    THANK YOU

  • cuckerberg432 9 hours ago

    ... the bubble that he created? After he threw $100,000,000,000 into a VR bubble mostly of his making? What a fucking jackass manchild.

  • obayesshelton 12 hours ago

    They were only to get huge options/stock based on the growth of the business.

    Plus, they will have had a vesting schedule.

    Beside the point that it was mental, but the dude wanted the best and was throwing money at the problem.

  • nabla9 10 hours ago

    BTW: Meta specifically denies that the reason is bubble fears, and they provide an alternative explanation in the article.

    Better title:

    Meta freezes AI hiring due to some basic organizational reasons.

    • 4gotunameagain 10 hours ago

      They would deny bubble fears even if leaked emails proved that it was the only thing they talked about.

      Would anyone seriously take Meta's or any megacorp's statements at face value?

    • skywhopper 10 hours ago

      A hiring freeze is not something you do because you are planning well.