xAI issues apology for Grok's antisemitic posts

(nbcnews.com)

25 points | by geox a day ago

15 comments

  • hendersoon a day ago

    Cool, cool.

    Now will they apologize for Grok 4 (the new one, not the MechaHitler Grok 3 referenced in this article) using Musk's tweets as primary sources for every request, explain how that managed to occur, and commit to not doing that in the future?

  • a day ago
    [deleted]
  • ashoeafoot a day ago

xAI: write an apology for whatever posts offend if NrOfOffended in graph > 2

  • thatguymike a day ago

    Oh I see, they set ‘is_mechahitler = True’, easy mistake, anyone could do it, probably one of those rapscallion ex-OpenAI employees who hadn’t fully absorbed the culture.

    • DoesntMatter22 a day ago

      Reddit has now fully leaked into hacker news

      • bcraven a day ago

        Please check the HN guidelines, particularly the final one.

      • queenkjuul a day ago

        Now?

  • freedomben a day ago

    > “We have removed that deprecated code and refactored the entire system to prevent further abuse. The new system prompt for the @grok bot will be published to our public github repo,” the statement said.

    Love them or hate them, or somewhere in between, I do appreciate this transparency.

    • mingus88 a day ago

      It’s a kinda meaningless statement, tbh.

      Pull requests to delete dead code or refactor are super common. It’s maintenance. Bravo.

      What was actually changed, I wonder?

      And the system prompt is important, and publishing it is good, but clearly the issue is the training data and the compliance with user prompts that made it a troll bot.

      So should we expect anything different moving forward? I'm not expecting it. Musk's character has not changed and he remains the driving force behind both companies.

    • loloquwowndueo a day ago

      If they don’t, the prompt will just get leaked by someone manipulating Grok itself within hours of release, and then picked apart and criticized. It’s not about transparency but about claiming to be transparent to save face.

    • harimau777 a day ago

      Is there any legal obligation for them not to lie about the prompt?

      • JumpCrisscross a day ago

        If they lie and any harm comes from it, yes, that increases liability.

        • mingus88 a day ago

          Every LLM seems to have a prominent disclaimer that results can be wrong, hallucinations exist, verify the output, etc.

          I’d wager it’s pretty much impossible to prove in court that whatever harm occurred was intended by xAI, or even that liability attaches, given all the disclaimers.

        • MangoToupe a day ago

          Liability for what? Have they been hit with a defamation suit or something?

    • queenkjuul a day ago

      It's not transparency, it's ass-covering technobabble.