2 comments

  • aurareturn 10 hours ago

    He's probably right. It doesn't mean that Anthropic and OpenAI and other LLM labs are doing the wrong thing.

    LLMs are still improving, seemingly exponentially in task capabilities. OpenAI and Anthropic can generate a huge amount of revenue, use that cash to buy more compute, and train an AI that does experiments and learns from the real world. When LLMs stop improving, I think we'll move on to something else.

    Right now, Sutton and Yann LeCun are both anti-LLM. They want to skip straight to "world AIs," which are likely not compute-feasible right now, given that the world contains infinitely more data than exists in digital form.

    In other words, I think the AI that Sutton and LeCun want to create will be created by current LLM labs.

    • __patchbit__ 8 hours ago

      Allocate $2 billion to studying the proton ballistic channel inside neuron microtubules, the state of water, and molecular-lattice configuration spaces at that scale, with the intent of rebasing the chip substrate from silicon to carbon.