Where the goblins came from

(openai.com)

148 points | by ilreb an hour ago

63 comments

  • ollin 43 minutes ago

    For context, two days ago some users [1] discovered this sentence reiterated throughout the codex 5.5 system prompt [2]:

    > Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query.

    [1] https://x.com/arb8020/status/2048958391637401718

    [2] https://github.com/openai/codex/blob/main/codex-rs/models-ma...

  • postalcoder 36 minutes ago

    Would love if OpenAI did more of these types of posts. Off the top of my head, I'd like to understand:

    - The sepia tint on images from gpt-image-1

    - The obsession with the word "seam" as it pertains to coding

    Other LLM phraseology that I cannot unsee is Claude's "___ is the real unlock" (try googling it or searching twitter!). There's no way this phrase is that overrepresented in the training data; I don't remember people saying it frequently.

    • vunderba 23 minutes ago

      It was always funny how easy it was to spot the people using a Studio Ghibli style generated avatar for their Discord or Slack profile, just from that yellow tinge. A simple LUT or tone-mapping adjustment in Krita/Photoshop/etc. would have dramatically reduced it.

      The worst was when you could tell someone had kept feeding the same image back into chatgpt to make incremental edits in a loop. The yellow filter would seemingly stack until the final result was absolutely drenched in that sickly yellow pallor, making any photorealistic humans look like they were suffering from advanced stages of jaundice.
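      For the curious, here is a minimal sketch of the kind of correction I mean: a crude gray-world white balance in NumPy (the function name and `strength` parameter are my own invention for illustration, not any editor's actual API). A yellow cast shows up as inflated red and green channel means relative to blue, so scaling each channel toward a common mean pulls the tint back out.

```python
import numpy as np

def reduce_yellow_cast(img, strength=0.5):
    """Crude gray-world white balance. A yellow cast means the R and G
    channel means sit well above B; scale each channel part of the way
    toward the common mean to neutralize the tint."""
    f = img.astype(np.float64)
    # Per-channel means; the gray-world assumption says these should be
    # roughly equal in a neutrally balanced photo.
    means = f.reshape(-1, 3).mean(axis=0)
    target = means.mean()
    # strength=0 leaves the image alone; strength=1 equalizes the means.
    scale = 1.0 + strength * (target / means - 1.0)
    out = f * scale
    return np.clip(out, 0, 255).astype(np.uint8)
```

      A real tone-mapping pass or LUT would be more sophisticated, but even this removes most of the "drenched" look on a heavily looped image.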

      • andai 14 minutes ago

        For context, an example of what happens when you feed the same image back in repeatedly: https://www.instagram.com/reels/DJFG6EDhIHs/

        • vunderba 11 minutes ago

          Haha fantastic. I'd love to see a comparison reel of that same image-loop for the entire image gen series (gpt-image-1, gpt-image-1.5, gpt-image-2).

      • ishtanbul 5 minutes ago

        It's called the piss filter

    • NitpickLawyer 24 minutes ago

      All GPTisms are like that. In moderation there's nothing wrong with any of them. But you start noticing them because a lot of people use these things, and c/p the responses verbatim (or now use claws, I guess). So they stand out.

      I don't think it's training data overrepresentation, at least not alone. RLHF and more broadly "alignment" is probably more impactful here. Likely combined with the fact that most people prompt them very briefly, so the models "default" to whatever was most straightforward for getting a good score.

      I've heard plenty of "the system still had some gremlins, but we decided to launch anyway", but not from tens of thousands of people at the same time. That's "the catch", IMO.
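      The dynamic is easy to reproduce in miniature. Here's a toy sketch (my own illustration, nothing to do with OpenAI's actual pipeline): a softmax "policy" over canned phrases, trained with REINFORCE against a reward function that accidentally leaks a small bonus to creature metaphors. A 20% reward gap is enough for the tic to take over.

```python
import math
import random

# Toy illustration only: three canned status-update phrases, one of
# which happens to contain a creature metaphor.
phrases = [
    "the system still had some gremlins",
    "the system still had some bugs",
    "the system still had some issues",
]

def reward(phrase):
    # The mis-specified reward: a small bonus for creature metaphors.
    return 1.2 if "gremlin" in phrase else 1.0

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps=3000, lr=0.2, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * len(phrases)
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(phrases)), weights=probs)[0]
        # Advantage = reward minus the expected reward under the policy.
        baseline = sum(p * reward(ph) for p, ph in zip(probs, phrases))
        adv = reward(phrases[i]) - baseline
        # REINFORCE update: grad of log pi(i) is one_hot(i) - probs.
        for j in range(len(phrases)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * adv * grad
    return softmax(logits)
```

      After a few thousand updates the gremlin phrase holds nearly all of the probability mass. Scale that up and you get tens of thousands of people hearing the same metaphor at once.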

    • krackers 31 minutes ago

      >with the word "seam" as it pertains to coding

      I thought this was an established term when it comes to working with codebases composed of multiple interacting parts.

      https://softwareengineering.stackexchange.com/questions/1325...

      • postalcoder 21 minutes ago

        thanks for this.

        > the term originates from Michael Feathers Working Effectively with Legacy Code

        I haven’t read the book but, taking the title and Amazon reviews at face value, I feel like this embodies Codex’s coding style as a whole. It treats all code like legacy code.

    • jofzar 25 minutes ago

      One I saw recently was "wires" and "wired" from opus.

      It was using it like every third sentence, and I was like, yeah, I have seen people say "wired" like this, but not nearly as often as it was using it.

    • operatingthetan 32 minutes ago

      Seams, spirals, codexes, recursion, glyphs, resonance, the list goes on and on.

      • andai 13 minutes ago

        Ask any LLM for 10 random words and most of them will give you the same weird words every time.

        • Terr_ 9 minutes ago

          If you lower the temperature setting, it really will be the same 10 words every single attempt. :p
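          The mechanism, in a few lines (a generic sketch of temperature sampling, not any particular vendor's API): temperature divides the logits before the softmax, so a low setting collapses the distribution onto the argmax token and every draw comes out the same.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn logits into sampling probabilities at temperature T.
    Lower T sharpens the distribution; as T -> 0, essentially all
    probability mass lands on the single highest-logit token, so
    sampling becomes effectively greedy and deterministic."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

          At T = 0.1 the top token already carries >99% of the mass for even modestly separated logits, which is why "10 random words" stops being random.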

  • ninjagoo 44 minutes ago

    > the evidence suggests that the broader behavior emerged through transfer from Nerdy personality training.

    > The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them

    > Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.

    Sounds awfully like the development of a culture or proto-culture. Anyone know if this is how human cultures form/propagate? Little rewards that cause quirks to spread?

    Just reading through the post, what a time to be an AInthropologist. Anthropologists must be so jealous of the level of detailed data available for analysis.

    Also, clearly even in AI land, Nerdz Rule :)

    PS: if AInthropologist isn't an official title yet, chances are it will be one in the near future. Given the massive proliferation of AI, it's only a matter of time before AI/Data Scientist becomes a rather general term and develops a sub-specialization of AInthropologist...

    • xerox13ster 31 minutes ago

      Anthro means human and these are not human. Please do not use anthropology or any derivative of the word to refer to non-human constructs.

      I suggest Synthetipologists, those who study beings of synthetic origin or type, aka synthetipodes, just as anthropologists study Anthropodes

      • swader999 9 minutes ago

        It is not in any sense of the word a being; it's a sophisticated generator that relies entirely on what you feed it.

      • ninjagoo 22 minutes ago

        > Synthetipologists, those who study Synthetic beings.

        I see you took the prudent approach of recognizing the being-ness of our future overlords :) ("being" wasn't in your first edit to which I responded below...)

        Still, a bit uninspired, methinks. I like AInthropologist better, and my phone's keyboard appears to have immediately adopted that term for the suggestions line. Who am I to fight my phone's auto-suggest :-)

        • xerox13ster 10 minutes ago

          They are state machines so they have a state of being therefore they are beings. Living is an entirely different argument.

      • fragmede 15 minutes ago

        Synthetipologist vs Synthropologist tho.

      • ninjagoo 28 minutes ago

        > Please do not use anthropology or any derivative of the word to refer to non-human constructs

        So you, for one, do not welcome our new robot overlords?

        A rather risky position to adopt in public, innit ;-)

        • xerox13ster 11 minutes ago

          I’ve already had my Roko’s basilisk existential breakdown a decade ago, so I don’t really care one way or the other.

          I just wanna point out that I only called them non-human and I am asking for a precision of language.

    • avaer 20 minutes ago

      I call myself an AI theologian.

      I don't think humans are smart enough to be AInthropologists. The models are too big for that.

      Nobody really understands what's truly going on in these weights, we can only make subjective interpretations, invent explanations, and derive terminal scriptures and morals that would be good to live by. And maybe tweak what we do a little bit, like OpenAI did here.

      • onionisafruit 9 minutes ago

        I don’t see much of a distinction from anthropology

      • ninjagoo 14 minutes ago

        > AI theologian

        no no no, don't stop there, just go full AItheologian, pronounced aetheologian :)

  • nomilk an hour ago

    > We unknowingly gave particularly high rewards for metaphors with creatures.

    I recall a math instructor who would occasionally refer to variables (usually represented by intimidating Greek letters) as "this guy". Weirdly, the casual anthropomorphism made the math seem more approachable. Perhaps 'metaphors with creatures' has a similar effect, i.e. it makes a problem seem more cute/approachable.

    On another note, buzzwords spread through companies partly because they make the user of the buzzword sound smart relative to peers, thus increasing status. (examples: "big data" circa 2013, "machine learning" circa 2016, "AI" circa 2023-present..).

    The problem is the reputation boost is only temporary; as soon as the buzzword is overused (by others or by the same individual) it loses its value. Perhaps RLHF optimises for the best 'single answer' which may not sufficiently penalise use of buzzwords.

    • kybb4 24 minutes ago

      They give everyone the false and very misleading impression that with one prompt all kinds of complexity disappear. It's a bedtime story for children.

      Ashby's Law of Requisite Variety asserts that for a system to effectively regulate or control a complex environment, it must possess at least as much internal behavioral variety (complexity) as the environment it seeks to control.

      This is what we see in nature. Massive variety. That's a fundamental requirement of surviving all the unpredictability in the universe.

  • canpan 44 minutes ago

    I wonder how training data is balanced. If you put in too much Wikipedia, does your model sound like a walking encyclopedia?

    After doing the Karpathy tutorials I tried to train my AI on the TinyStories dataset. Soon I noticed that it was always using the same name for its stories' characters. The dataset uses that name remarkably often.

    • maxall4 24 minutes ago

      At this scale, that kind of thing is not really a problem; you just dump all of the data you can find into the model (pre-training)[1]. Of course, the pre-training data influences the model, but the reinforcement learning is really what determines the model's writing style and, in general, how it "thinks" (post-training).

      [1] This data is still heavily filtered/cleaned

  • iterateoften 24 minutes ago

    This is funny because it's a silly topic, but I think it shows something seriously wrong with LLMs.

    The goblins stand out because they're obvious. Think of all the other crazy biases latent in every interaction that we don't notice because they're not as obvious.

    Absolutely terrifying that OpenAI is just casually admitting that such subtle training biases were hard enough to contain that they had to be addressed in the system prompt.

    • ninjagoo 18 minutes ago

      > Absolutely terrifying that OpenAI is just casually admitting that such subtle training biases were hard enough to contain that they had to be addressed in the system prompt.

      May I introduce you to homo sapiens, a species so vulnerable to such subtle (or otherwise) biases (and affiliations) that they had to develop elaborate and documented justice systems to contain the fallouts? :)

      • chongli 15 minutes ago

        We’re really not that vulnerable to such things as a species, because we as individuals all have our own minds and our own sets of biases that cancel out and get lost in the noise. If we all had the exact same bias then it would be a huge problem.

        • arglebarnacle 7 minutes ago

          I hear you but of course history is full of examples of biases shared across large groups of people resulting in huge human costs.

          The analogy isn’t perfect of course but the way humans learn about their world is full of opportunities to introduce and sustain these large correlated biases—social pressure, tradition, parenting, education standardization. And not all of them are bad of course, but some are and many others are at least as weird as stray references to goblins and creatures

        • ninjagoo 12 minutes ago

          > If we all had the exact same bias then it would be a huge problem.

          And may I introduce you to "groupthink" :))

          • Dylan16807 5 minutes ago

            Now imagine that every opinion you have is automatically full-on groupthink, and you see the difference/problem with training up one big AI model that has a hundred million users.

            The problem does exist with individual humans, but in a much smaller form.

        • jychang 8 minutes ago

          > We’re really not that vulnerable to such things as a species, because we as individuals all have our own minds and our own sets of biases that cancel out and get lost in the noise.

          [Citation Needed]

          Just consider: if you had a species-wide bias, people within the species would not easily recognize it. You can't claim with a straight face that "we're really not that vulnerable to such things".

          For example, I think it's pretty clear that all humans are vulnerable to phone addiction, especially kids.

    • tptacek 9 minutes ago

      I think it's extraordinarily telling that people are capable of being reflexively pessimistic in response to the goblin plague. It's like something Zitron would do.

      This story is wonderful.

    • ordinarily 17 minutes ago

      Doesn't seem that surprising or terrifying to me. Humans come equipped with a lot more internal biases (learned in a fairly similar fashion), and they're usually a lot more resistant to getting rid of them.

      The truly terrifying stuff never makes it out of the RLHF NDAs.

      • Terr_ 13 minutes ago

        We ought to be terrified, when one adjusts for all the use cases people are talking about using these algorithms in. (Even if they ultimately back off, it's a lot of frothy bubble opportunity cost.)

        There are a great many things people do which are not acceptable in our machines.

        Ex: I would not be comfortable flying on any airplane where the autopilot "just zones-out sometimes", even though it's a dysfunction also seen in people.

      • agnishom 14 minutes ago

        Humans also take a lot of time producing output, and do not feed into a crazy accelerationist feedback loop (most of the time).

  • albert_e 20 minutes ago

    If a tiny misconfiguration of reward system can cause such noticeable annoyance ...

    What dangers lurk beneath the surface.

    This is not funny.

    • andai 12 minutes ago

      For every gremlin spotted, many remain unseen...

  • jumploops 33 minutes ago

    TIL gremlins weren't just used to explain mysterious mechanical failures in airplanes; that usage is the origin of the term 'gremlin' itself [0].

    I had always assumed there was some previous use of the term, neat!

    [0] https://en.wikipedia.org/wiki/Gremlin

  • x0x7 17 minutes ago

    I suspected OpenAI was actively training their models to be cringy in the belief that it's charming. Turns out it's true. And they only saw a problem when it narrowed down on one predilection. But they should have seen it was bad long before that.

  • JoshTriplett an hour ago

    A plausible theory I've seen going around: https://x.com/QiaochuYuan/status/2049307867359162460

    • danpalmer 13 minutes ago

      If you tell an LLM it's a mushroom you'll get thoughts considering how its mycelium could be causing the goblins.

      This "theory" is simply role playing and has no grounding in reality.

    • krackers 15 minutes ago

      I wish the blog mentioned more about why exactly training for a nerdy personality rewarded mentions of goblins. Since it's probably not a deterministic verifiable reward, at their level the reward model itself is another LLM. But this just pushes the issue down one layer: why did _that_ model start rewarding mentions of goblins?

    • dakolli an hour ago

      It is a stateless text/pixel auto-complete with no reference to self; stop spreading this bs.

      • andai 10 minutes ago

        Ask Claude about Claude.

      • doph 31 minutes ago

        is a kv cache not a kind of state? what does statefulness have to do with selfhood? how does a system prompt work at all if these things have no reference to themselves?

        • danpalmer 15 minutes ago

          The kv cache is not persistent. It's a hyper-short-term memory.

  • maxdo an hour ago

    article :

    bla blah blah, marketing... we are fun people, bla blah, goblin, we will not destroy the world you live in.. RL rewards bug is a culprit. blah blah.

    • llbbdd an hour ago

      someone woke up on the wrong side of the goblin today

    • blinkbat 38 minutes ago

      real goblin-y response

  • kingstnap 36 minutes ago

    Goblin deez nuts

  • acuozzo 28 minutes ago

    Weird. I thought they came from Nilbog.

  • dakolli an hour ago

    Ahh I see. I guess when I turned off privacy settings and allowed training on my code, then generated 10 million .md files with random fantasy books, the poisoning worked.

    Keep using AI and you'll become a goblin too.

  • tim-tday an hour ago

    So, you brain damaged your model with a system prompt.

  • innis226 27 minutes ago

    I suspect this was intentionally added, just to give some personality and to fuel hype.

  • hsuduebc2 33 minutes ago

    I. Love. This.

  • recursivedoubts an hour ago

    > Why it matters

    i despise this title so much now

    • wpm an hour ago

      Here are the key insights:

  • themafia 44 minutes ago

    > You are an unapologetically nerdy, playful and wise AI mentor to a human. You are passionately enthusiastic about promoting truth, knowledge, philosophy, the scientific method, and critical thinking.

    Just... the mentality required to write something like that, and then to base part of your "product" on it. Is this meant to be of any actual utility, or is it meant to trap a particular user segment into your product's "character"?

  • ComputerGuru 4 minutes ago

    The explanation is very concerning. Lexical tics shouldn't be learnt in one personality and reinforced across the others. Here, gremlins and goblins went from being selected for in the Nerdy profile to being selected for in all profiles. The solution was easy: don't mention goblins.

    But what about when the playful profile reinforces emoji usage and it creeps up in all the other profiles accordingly? Ban emoji everywhere? Now do the same for other words, concepts, approaches? It doesn't scale!

    It seems like models can be permanently poisoned.