8 comments

  • zhangchen 37 minutes ago

    Has anyone tried implementing something like System M's meta-control switching in practice? Curious how you'd handle the reward signal for deciding when to switch between observation and active exploration without it collapsing into one mode.
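    For what it's worth, here's one toy framing (my own sketch, not anything from the paper): treat the observe/explore choice as a two-armed bandit whose reward is *learning progress* (drop in the learner's prediction loss) rather than task reward, with an epsilon floor so neither mode can fully collapse. All names below are made up:

```python
import random

MODES = ["observe", "explore"]

class MetaController:
    """Toy System-M stand-in: a two-armed bandit over learning modes."""

    def __init__(self, epsilon=0.1, step=0.2):
        # Running estimate of recent learning progress per mode.
        self.value = {m: 0.0 for m in MODES}
        self.epsilon = epsilon  # forced minimum usage of both modes
        self.step = step        # exponential-moving-average step size

    def pick(self):
        # Epsilon-greedy: mostly exploit the mode that has been
        # producing the most learning progress lately.
        if random.random() < self.epsilon:
            return random.choice(MODES)
        return max(MODES, key=self.value.get)

    def update(self, mode, loss_before, loss_after):
        # Reward = how much the learner improved, not how well it scored.
        progress = loss_before - loss_after
        self.value[mode] += self.step * (progress - self.value[mode])
```

    The anti-collapse part is the reward choice: a mode the agent has exhausted stops yielding progress, its value decays toward zero, and the epsilon floor keeps sampling the other mode so its value can recover.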

  • aanet 5 hours ago

    by Emmanuel Dupoux, Yann LeCun, Jitendra Malik

    "he proposed framework integrates learning from observation (System A) and learning from active behavior (System B) while flexibly switching between these learning modes as a function of internally generated meta-control signals (System M). We discuss how this could be built by taking inspiration on how organisms adapt to real-world, dynamic environments across evolutionary and developmental timescales. "

    • iFire an hour ago

      https://github.com/plastic-labs/honcho has the idea of one sided observations for RAG.

    • dasil003 3 hours ago

      If this were done well in a way that was productive for corporate work, I suspect the AI would engage in Machiavellian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison. And I'm not sure our legal and social structures have the capacity to absorb that without very very bad things happening.

      • marsten 2 hours ago

        Agents playing the iterated prisoner's dilemma learn to cooperate. It's usually not a dominant strategy to be entirely sociopathic when other players are involved.
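        The classic demonstration is easy to reproduce. A quick sketch (standard payoff matrix, toy strategies, nothing specific to the paper): tit-for-tat earns far more against itself than pure defection earns against tit-for-tat, so cooperation pays once the game repeats.

```python
# Standard prisoner's dilemma payoffs: (my score, their score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's last move.
    return opp_history[-1] if opp_history else "C"

def always_defect(opp_history):
    return "D"

def play(p1, p2, rounds=100):
    hist1, hist2 = [], []   # moves made by each player so far
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = p1(hist2), p2(hist1)  # each sees the opponent's past
        r1, r2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
        hist1.append(m1)
        hist2.append(m2)
    return s1, s2
```

        Over 100 rounds, two tit-for-tat players score 300 each, while the defector against tit-for-tat tops out at 104: one exploitative win followed by 99 rounds of mutual punishment.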

        • ehnto an hour ago

          You don't get that many iterations in the real world though, and if one of your first iterations is particularly bad you don't get any more iterations.

  • beernet 4 hours ago

    The paper's critique of the 'data wall' and language-centrism is spot on. We’ve been treating AI training like an assembly line where the machine is passive, and then we wonder why it fails in non-stationary environments. It’s the ultimate 'padded room' architecture: the model is isolated from reality and relies on human-curated data to even function.

    The proposed System M (Meta-control) is a nice theoretical fix, but the implementation is where the wheels usually come off. Integrating observation (A) and action (B) sounds great until the agent starts hallucinating its own feedback loops. Unless we can move away from this 'outsourced learning' where humans have to fix every domain mismatch, we're just building increasingly expensive parrots. I'm skeptical whether 'bilevel optimization' is enough to bridge that gap or whether we're just adding another layer of complexity to a fundamentally limited transformer architecture.
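    For readers unfamiliar with the bilevel framing: the idea is an inner learner that fits its parameters given some meta-parameter, while an outer loop adjusts that meta-parameter by the inner solution's held-out loss. A minimal numeric sketch, with toy stand-in functions that are entirely my own (not from the paper):

```python
def inner_solve(lam):
    # Inner problem with a closed form: w* minimizing (w - lam)^2 + 0.1*w^2.
    # Think of lam as a meta-parameter, e.g. how much to weight
    # observational vs. interactive data.
    return lam / 1.1

def val_loss(w):
    # Outer objective, evaluated on the inner solution.
    return (w - 1.0) ** 2

def outer_step(lam, lr=0.5, eps=1e-4):
    # Central finite-difference gradient of val_loss(inner_solve(lam)).
    g = (val_loss(inner_solve(lam + eps))
         - val_loss(inner_solve(lam - eps))) / (2 * eps)
    return lam - lr * g

lam = 0.0
for _ in range(50):
    lam = outer_step(lam)
# The outer loop drives lam toward 1.1, so inner_solve(lam) approaches 1.0
# and the validation loss approaches zero.
```

    Whether this scales past toy problems is exactly the open question: in the real setting the inner problem has no closed form, and differentiating through it is the expensive part.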

  • jdkee 2 hours ago

    LeCun has been talking about his JEPA models for a while.

    https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/