Because It Doesn't Have To

(blog.computationalcomplexity.org)

17 points | by zdw a day ago

5 comments

  • dzink 10 minutes ago

    Children learn by playing because not much is expected of the outcome in play. Improvement happens when you can play: when AI has a play environment to learn in with reinforcement, when entrepreneurs are allowed to try, fail, and do better. Doctors learn by practicing under supervision, or on corpses, until they can do it for real. No straight line goes up without a jiggle in the beginning.

  • chermi an hour ago

    I like the networking perspective, but the ML perspective is such a loose analogy that it's hard to even judge. I mean, we've known forever that softening constraints allows you to reach solutions otherwise unreachable, for one? There's a gulf of difference between succeeding at something deterministic by allowing failure vs. good pattern matching by optimizing over a rough landscape of examples.

    • nh23423fefe 18 minutes ago

      I'm not seeing how describing measures over a possibility space counts as allowing for mistakes.

      Seems like content reverse engineered from title.

  • dataviz1000 an hour ago

    The LLM reasoning models behave strikingly similarly to superscalar out-of-order execution processors, with decomposition, verification, and error-correction steps.

    Moreover, the LLM reasoning models are reliably consistent in solving the same task with the same prompt using different variables. This can be demonstrated.

    Not everything has to be deterministic to be useful. Nonetheless, understanding how LLMs can be applied and where they are useful will help a lot of people be less frustrated and spend fewer tokens.
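    The decompose/verify/correct loop described above can be sketched in Python. This is a toy illustration, not an actual LLM pipeline: `call_model` is a hypothetical stand-in for a model call (here it deterministically "fails" once to exercise the retry path), and the retry limit is an assumption.

    ```python
    # Sketch of a decomposition / verification / error-correction loop,
    # loosely analogous to how reasoning models (or out-of-order CPUs)
    # check individual steps and retry the ones that fail.

    def call_model(step: str, attempt: int) -> str:
        # Hypothetical stand-in for an LLM call; it returns a bad
        # result on the first attempt at step-2 to show error correction.
        if step == "step-2" and attempt == 0:
            return "bad-output"
        return f"ok:{step}"

    def verify(step: str, output: str) -> bool:
        # Verification step: accept only well-formed outputs.
        return output == f"ok:{step}"

    def solve(task_steps, max_retries=3):
        # Decomposition: the task is already split into steps;
        # each step is generated, verified, and retried on failure.
        results = []
        for step in task_steps:
            for attempt in range(max_retries):
                out = call_model(step, attempt)
                if verify(step, out):
                    results.append(out)
                    break
            else:
                raise RuntimeError(f"{step} failed after {max_retries} tries")
        return results

    print(solve(["step-1", "step-2", "step-3"]))
    # → ['ok:step-1', 'ok:step-2', 'ok:step-3']
    ```

    The point of the sketch is that usefulness comes from the verify-and-retry structure, not from any single model call being deterministic.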

  • booleandilemma an hour ago

    Interesting. I could apply this to some people I've worked with. They work so well because they don't have to.