25 comments

  • kordlessagain 2 days ago

    Let's analyze these specific emails for manipulation patterns, with the aim of understanding and preventing harm rather than learning manipulation:

    Key manipulative behaviors shown in the emails:

    1. Using artificial urgency to force decisions: "Deepmind is going to give everyone massive counteroffers tomorrow to try to kill it"

    2. Changing narratives to maintain control: Sam Altman's reasons for wanting CEO title kept shifting according to Greg/Ilya

    3. Information control and selective disclosure: Sam being "bothered by how much Greg and Ilya keep the whole team in the loop"

    4. Creating dependency through resource control: Elon using funding as leverage - "I will no longer fund OpenAI until..."

    5. Double binds and no-win situations: Team had to choose between losing funding or accepting control terms

    6. Triangulation between parties: Playing Greg/Ilya against Sam, Sam against Elon, etc.

    These patterns rely on control, artificial urgency, shifting narratives, and the manipulation of relationships to maintain power.

    • d0mine 2 days ago

      Someone thought they were funding a non-profit; instead, they gave money to a [trillion $ unicorn] startup for free. Foolish.

    • simonswords82 2 days ago

      Yea. It’s called social dynamics and plays out in pretty much any situation.

      Please don’t think I’m a fan of the people at play here, I’m not. But it’s pretty standard stuff.

      • randomlurking 2 days ago

        I’d call this politics rather than social dynamics, and I’d argue that its importance differs across companies. Generally, though, I agree that I don’t find this out of the norm (in those specific circumstances).

  • habryka 2 days ago

    Text-version of all the emails that this article refers to: https://www.lesswrong.com/posts/5jjk4CDnj9tA7ugxr/openai-ema...

  • bionhoward 2 days ago

    Just hope everyone takes a moment to consider: are the customer noncompete clauses at OpenAI, XAI, Anthropic, and Microsoft hypocritical?

    If they learn from us, and we’re not allowed to learn from them, how is that good for safety? How much does that silo safety information?

    Yes, I realize many folks just blow it off, and many more are normies who don’t know or care, but doesn’t it seem wrong to learn from humanity while telling humanity they can’t learn from you?

    Thankful we have open AI models from Meta and Alibaba!

  • meiraleal 2 days ago

    "The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility."

    Today I became a fan of Ilya Sutskever

    • melodyogonna 2 days ago

      Ilya is based. I became totally distrustful of Sam once it was known Ilya was part of the coup. He is the sort of person who tries to keep the company pointed in the right direction.

    • blackeyeblitzar 2 days ago

      It sounded like that was directed at Elon, not Sam? Or is it the other way around?

      Ilya makes a good point, but I also wonder if he is being naive. In order to get to the stage where AGI can be built outside of big tech, you probably do need a CEO with typical CEO powers. Maybe there can be some convoluted structure to avoid dictatorial control at the end, but I also wonder if this is sort of an organizational distraction. That might explain the push to just move on and get past all this.

      Of course this depends on who we’re talking about. I would trust Elon to share AGI as he did Tesla patents. But not Sam.

      • meiraleal 2 days ago

        > It sounded like that was directed at Elon not Sam? Or is it the other way?

        It was to Elon at that moment. Elon wanted majority equity, board control, and to be CEO.

        > Of course this depends on who we’re talking about. I would trust Elon to share AGI as he did Tesla patents. But not Sam.

        I think Ilya was right to make it impossible rather than depending on trusting Elon. Then he tried again with Sam, but that time he lost.

      • juped 2 days ago

        "is immune to firing by the board" is not a typical CEO power.

        • blackeyeblitzar 2 days ago

          I agree with that. But I was not sure if that requirement was mentioned because of the unique and complicated structure, with the nonprofit board. The members of that board seemed to be mishandling the Sam Altman situation, at least from a public relations perspective. I think it would be difficult to accept a CEO role where the board appears to be incompetent. Of course, it is also possible that they were competent in rejecting Sam Altman, but simply inexperienced in handling the high stakes game of dealing with outside investors like Satya Nadella from Microsoft, or with very visible public relations crises.

  • ilaksh 2 days ago

    I have a theory that everything is BS politics now. (Maybe it always has been.)

    Is Elon's new lawsuit still on?

    Does Elon's new political situation affect it?

    Altman recently wrote an X post showing an OpenAI model making a less left-leaning reply than Grok.

    What I am really waiting for over the next couple of years is for someone to demonstrate real progress in scaling the next hardware paradigm. Probably something fairly different, like a completely memory-focused type of computation built on memristors or something similar. There have been examples built by labs, but they had relatively tiny amounts of compute, albeit theoretically with the _potential_ for massive efficiency gains in some future form.

    The next 100X or 1000X efficiency gain from a new hardware paradigm may make a lot of the current generation's infrastructure and bickering mostly irrelevant. And I think it will be a huge boost to open source AI.

    In my opinion, what's relevant is not any particular deal or company, but that we develop and adapt to a nuanced cultural understanding of what types and levels of AI capability can help us and which will actually be dangerous sooner or later, and how to fairly deploy the beneficial and avoid the dangerous.

    But I think it needs to be a broad understanding in society.

  • 4ntiq 2 days ago

    > Elon Musk to Shivon Zilis, (cc: Sam Teller) - Aug 28, 2017

    > This is very annoying. Please encourage them to go start a company. I've had enough.

    this is the gold bit for me because it signifies the stark difference between shitposting and reality in the mind of someone who knows which way is up. HN stands a chance to learn from this downward-spiral-esque email exchange, but something tells me the lessons won't come through.

    pleasant to see dota mentioned, haven't heard that name in quite some time.

    • johnisgood 2 days ago

      > The sharp rise in Dota bot performance is apparently causing people internally to worry that the timeline to AGI is sooner than they’d thought before.

      I do not see why one would worry about AGI just because there is a rise in bot performance. What kind of performance are we talking about? I have written bots for various games myself, but never once did I think I was creating "intelligence" or that it would ever lead to that.

      • robertlagrant 2 days ago

        I think it's more that they're using a general-purpose intelligence (or intelligence-simulating) method that applies broadly, rather than creating a bespoke bot that follows the rules of one game.
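
        To make the distinction concrete, here is a minimal, hypothetical sketch in Python (not OpenAI's actual system, which reportedly used large-scale self-play reinforcement learning): a scripted bot hard-codes the rules of one game, while a generic learner such as tabular Q-learning can be dropped into any environment that emits states and rewards.

          import random
          from collections import defaultdict

          def scripted_bot(state):
              # Bespoke, rule-following bot: hand-written logic that only
              # makes sense for the one game it was built for.
              if state == "enemy_near":
                  return "attack"
              return "farm"

          class QLearner:
              # Generic tabular Q-learning agent: knows nothing about any
              # particular game, only (state, action, reward) interactions.
              def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
                  self.q = defaultdict(float)  # Q-values, default 0.0
                  self.actions = list(actions)
                  self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

              def act(self, state):
                  if random.random() < self.epsilon:  # explore occasionally
                      return random.choice(self.actions)
                  # otherwise exploit the best-known action for this state
                  return max(self.actions, key=lambda a: self.q[(state, a)])

              def update(self, state, action, reward, next_state):
                  # Standard temporal-difference update toward the best next action.
                  best_next = max(self.q[(next_state, a)] for a in self.actions)
                  target = reward + self.gamma * best_next
                  self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

        The same QLearner could be trained on a gridworld, a card game, or a Dota abstraction without changing a line of its code; only the environment changes. That generality, rather than raw bot skill, is the worrying signal.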

      • grugagag 2 days ago

        I'm more worried about bots flooding all kinds of social media, pushing public opinion in the direction their owners want. I feel that this is already happening.

        • thrw42A8N a day ago

          That was happening for many years before any AI, and now that AI is well known, people are finally starting to be sceptical. A positive development.

        • threatripper 2 days ago

          I'm pretty sure that LLMs already overpower the average user's arguing abilities on many platforms. Not even low-resolution video calls are safe anymore as a way to verify that somebody is a real person.

          • johnisgood 2 days ago

            That is true; we must assume nothing is genuine or authentic, video or audio. It used to be just photos, but it is no longer limited to images.

      • fragmede 2 days ago

        Kurzgesagt had a bit on it in their AI video, but that kind of game bot is specialized to your game; it doesn't generalize. OpenAI used a more general AI to play Dota.

        E.g., Deep Blue beat Kasparov (1997), but that was a narrow AI and couldn't play Go or Dota.

        https://youtu.be/fa8k8IQ1_X0

    • potamic 2 days ago

      What lessons are you alluding to? He's essentially telling them to go start their own company if they want things their way. But their resistance might've played a role, in the end, in preventing Musk from taking control of the company.

      • baxtr 2 days ago

        I assume the conflict is around how they set it up in the first place. They didn't want the researchers to be financially motivated.

        > The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we’ll pay them a competitive salary and give them YC equity for the upside). We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t. At some point we’d get someone to run the team, but he/she probably shouldn’t be on the governance board.
