46 comments

  • JLvL 15 hours ago

• Trapit Bansal: pioneered RL on chain of thought and co-creator of o-series models at OpenAI.

• Shuchao Bi: co-creator of GPT-4o voice mode and o4-mini. Previously led multimodal post-training at OpenAI.

• Huiwen Chang: co-creator of GPT-4o's image generation, and previously invented the MaskGIT and Muse text-to-image architectures at Google Research.

    • Ji Lin: helped build o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and Operator reasoning stack.

    • Joel Pobar: inference at Anthropic. Previously at Meta for 11 years on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.

    • Jack Rae: pre-training tech lead for Gemini and reasoning for Gemini 2.5. Led Gopher and Chinchilla early LLM efforts at DeepMind.

• Hongyu Ren: co-creator of GPT-4o, 4o-mini, o1-mini, o3-mini, o3 and o4-mini. Previously led a post-training group at OpenAI.

    • Johan Schalkwyk: former Google Fellow, early contributor to Sesame, and technical lead for Maya.

• Pei Sun: post-training, coding, and reasoning for Gemini at Google DeepMind. Previously created the last two generations of Waymo's perception models.

• Jiahui Yu: co-creator of o3, o4-mini, GPT-4.1 and GPT-4o. Previously led the perception team at OpenAI, and co-led multimodal at Gemini.

• Shengjia Zhao: co-creator of ChatGPT, GPT-4, all mini models, 4.1 and o3. Previously led synthetic data at OpenAI.

    • heyheyhey 11 hours ago

      Anybody have an idea of how much these people are making?

      • mslansn 10 hours ago

        Minimum wage in California is $16.50 per hour, so they are making that at least.

      • sandspar 6 hours ago

        Altman said on a recent podcast that Zuckerberg is poaching OpenAI researchers, giving offers of up to $100,000,000 (one hundred million).

    • paxys 11 hours ago

      Also Alexandr Wang, Nat Friedman.

      • darkwizard42 10 hours ago

This list is specifically about the poaching from OpenAI / DM, I believe.

    • smeeger 11 hours ago

How could these people actively try to open Pandora's box and make all humans obsolete? If we survive this, I imagine there will be something like the Nuremberg trials for these people who traded everyone's safety and wellbeing for money. And I hope the results will be the same.

  • pyman 13 hours ago

    Mark Zuckerberg hiring top AI researchers worries me more than Iran hiring nuclear scientists.

    • smeeger 11 hours ago

Better than Sam Altman having them.

      • peanuty1 3 hours ago

        Sam Altman is leading OpenAI out of the goodness of his heart. He told Congress he has zero equity or financial stake in OpenAI.

    • quantified 11 hours ago

      With luck, they'll vaporize billions of dollars on nothing of consequence.

      If they come up with anything of consequence, we'll have an incredibly higher level of Facebook monitoring of our lives in all scopes. Also such a level of AI crap (info/disinfo in politics, crime, arts, etc.) that ironically in-person exchanges will be valued more highly than today. When everything you see on pixels is suspect, only the tangible can be trusted.

      • smeeger 11 hours ago

Do you remember the chorus of people on HN two years ago who said that the next AI winter was already upon us?

        • paxys 11 hours ago

          Were they wrong? It's pretty undeniable that every model release since GPT-4 has been less impactful than the last.

          • smeeger 10 hours ago

I'm pretty sure that when the top companies are poaching leaders in AI for some of the largest payouts in history, we are not in an AI winter. So they were completely wrong. I'm guessing you were one of them.

            • paxys 8 hours ago

              Zuck has spent more on Metaverse than AI. A multi-trillion dollar company throwing a few billion at a problem means nothing. Show me the results, not the hype.

            • lispisok 9 hours ago

              Bubbles are always the craziest right before they pop

              • quantified 6 hours ago

Dot-com bubble: popped, and yet if you don't have a .com presence you have a .net, .org, or similar. Blockchain bubble: popped? Looking good; always profitable to bet on crime and stupidity. Real estate bubble: popped, but what do you think about the asset price of housing now?

Moore's Law wasn't a law; it was a reflection of investment, which reached dizzying heights in its heyday. I think there's a ton of over-hype now, but some stuff will come out of it.

  • jxjnskkzxxhx 13 hours ago

Is Mark Zuckerberg systematically behind the curve on every hype?

    • JumpCrisscross 12 hours ago

> Is Mark Zuckerberg systematically behind the curve on every hype?

Trend following with chutzpah, particularly through acquisitions, has been a winning strategy for Zuckerberg and his shareholders.

    • pyman 13 hours ago

      He's just trying to figure out how to monetise your WhatsApp messages

      • 4ndrewl 12 hours ago

        In the "metaverse"

    • bamboozled 12 hours ago

This includes fashion and hairstyles, it seems...

  • smeeger 11 hours ago

The idea of Mark Zuckerberg being at the helm of digital superintelligence sickens me.

    • paxys 8 hours ago

      He spent $50B to be at the helm of the "Metaverse". Don't worry, we are safe.

    • peanuty1 3 hours ago

      Better than Samuel Altman being at the helm.

    • ajkjk 11 hours ago

The cringiest possible future.

  • goatlover 13 hours ago

    Will the Superintelligence finally make the Metaverse profitable and popular?

  • hooloovoo_zoo 13 hours ago

    Poor Sam Altman, 300B worth of trade secrets bought out from under him for a paltry few hundred million.

    • JumpCrisscross 12 hours ago

      > Poor Sam Altman, 300B worth of trade secrets bought out from under him for a paltry few hundred million

      Sorry, you don't lose people when you treat them well. Add to that Altman's penchant for organisational dysfunction and the (in part resulting) illiquidity of OpenAI's employees' equity-not-equity and this makes a lot of sense. Broadly, it's good for the American AI ecosystem for this competition for talent to exist.

      • hn_throwaway_99 12 hours ago

        In retrospect, I wonder if the original ethos of the non-profit structure of OpenAI was a scam from the get go, or just woefully naive. And to emphasize, I'm not talking just about Altman.

        That is, when you create this cutting edge, powerful tech, it turns out that people are willing to pay gobs of money for it. So if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy.

        That's why I want to gag a little when I hear all this flowery language about how AI will cure all these diseases and be a huge boon to humanity. Let's get real - people are so hyped about this because they believe it will make them rich. And it most likely will, and to be clear, I don't blame them. The only thing I blame folks for is trying to wrap "I'd like to get rich" goals in moralistic BS.

        • JumpCrisscross 12 hours ago

          > wonder if the original ethos of the non-profit structure of OpenAI was a scam from the get go, or just woefully naive

          Based on behaviour, it appears they didn't think they'd do anything impactful. When OpenAI accidentally created something important Altman immediately (a) actually got involved to (b) reverse course.

          > if somehow OpenAI had managed to stay as a non-profit (let's pretend training didn't cost a bajillion dollars), they still would have lost all of their top engineers to deeper pockets if they didn't pursue an aggressive monetization strategy

          I'm not so sure. OpenAI would have held a unique position as both first mover and moral arbiter. That's a powerful place to be, albeit not a position Silicon Valley is comfortable or competent in.

          I'm also not sure pursuing monetisation requires a for-profit structure. That's more a function of the cost of training, though again, a licensing partnership with, I don't know, Microsoft, would alleviate that pressure without requiring giving up control.

        • meepmorp 11 hours ago

          It wasn't exactly a scam, it's just nobody thought it'd be worth real money that fast, so the transition from noble venture to cash grab happened faster than expected.

        • s1artibartfast 12 hours ago

          Getting rich going good is better than just getting rich. People like both.

          Which part are you skeptical about? that people also like to do good, or that AI can do good?

  • weird_trousers 14 hours ago

So much wasted money, it makes me sick…

There is so much money needed to solve other problems, especially in health.

I don't blame the newcomers, but Zuckerberg.

    • linotype 14 hours ago

Better spent on ML than on the next VR vaporware.

    • dekhn 14 hours ago

Zuck already funds health research (a lot of it, and very ML-focused).

      • xvector 12 hours ago

        Wild how HN is flagging this objectively correct comment into the ground because "zuck bad!!1"

        • dekhn 12 hours ago

          I really do wish there was a way to downvote "because I don't like what the person is saying, even if it's true"

    • twoodfin 13 hours ago

      This stuff is ridiculously important for healthcare: It’s a demographic fact that both the US and the world at large are simply not training enough doctors and nurses to provide today’s standard of care at current staffing levels as the population ages.

      We need massive productivity boosts in medicine just as fast as we can get them.

      • hn_throwaway_99 12 hours ago

        I sincerely doubt this understaffing of medical professionals is a technology problem, and I believe it much more likely to be an economic structural problem. And overall, I think that powerful generative AI will make these economic structural problems much worse.

        • paxys 9 hours ago

          It's a gatekeeping problem. Doctors don't want more doctors because it dilutes their own value, so medical school and residency spots are kept artificially limited.

      • trainerxr50 11 hours ago

It doesn't take superintelligence to give my elderly father a bath or wipe his ass.

I think the main problem is that we would almost need an economic depression so that, at the margin, there were far fewer alternative jobs available than giving my father a bath.

Then also consider that, say, superintelligence adds a few years to his life because of better diagnostics and treatment of disease. That actually makes the day-to-day care problem worse in the aggregate.

We are headed toward this boomer long-term-care disaster, and there is nothing that is going to avert it. The boomers I talk to are completely in denial about this problem too. They are expecting the long-term care situation to look like what their parents had. I just try to convince every boomer I know that they have to do everything they can physically now to better themselves and stay out of long-term care as long as possible.

    • cheevly 12 hours ago

Do you realize how much health-related research Zuckerberg's foundation does? There was literally a post on here last week about it, geez.

    • xvector 12 hours ago

Superintelligence, or even just AGI, short-circuits all our problems.

      • jolt42 6 hours ago

        Just another tool that can be used for good or bad.

    • wetpaws 14 hours ago

      [dead]