We're Learning Backwards: LLMs build intelligence in reverse

(pleasedontcite.me)

3 points | by preyneyv 10 hours ago

1 comment

  • preyneyv 10 hours ago

    Wrote this after a lecture on Hays & Efros' scene completion, which fills in missing image regions by simple lookup over 2.3M photos. It made me think about where the "simple model + lots of data" bet actually leads. The article traces that thread through the Scaling Hypothesis to ARC-AGI-3, where frontier LLMs score under 1% on novel interactive tasks. I think LLMs built intelligence in the wrong order, and I'm curious where people think I'm wrong.