1 comment

  • vorticalbox 23 minutes ago

    This reminds me of https://dnhkng.github.io/posts/rys/

    David looks into the LLM, finds the "thinking" layers, cuts out the duplicates, and puts them back to back.

    This increases the LLM's scores with basically no overhead.
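
    For the curious, the layer reordering described there might look roughly like this toy sketch. It is only my guess at the mechanics: the cosine-similarity heuristic and the threshold are my assumptions, not details from the post.

    ```python
    import numpy as np

    def cosine(a, b):
        # Flatten weight tensors and compare direction, not magnitude.
        a, b = a.ravel(), b.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def restack_duplicates(layers, threshold=0.99):
        """Reorder layer indices so near-duplicate layers sit back to back.

        `layers` is a list of per-block weight arrays. The similarity
        threshold is a made-up heuristic, not taken from the post.
        """
        n = len(layers)
        placed = [False] * n
        order = []
        for i in range(n):
            if placed[i]:
                continue
            order.append(i)
            placed[i] = True
            for j in range(i + 1, n):
                if not placed[j] and cosine(layers[i], layers[j]) >= threshold:
                    order.append(j)  # duplicate goes directly after its twin
                    placed[j] = True
        return order

    # Toy example: layer 2 duplicates layer 0, so it moves next to it.
    w = np.random.default_rng(0).normal(size=(4, 4))
    layers = [w, np.eye(4), w.copy(), np.ones((4, 4))]
    print(restack_duplicates(layers))  # [0, 2, 1, 3]
    ```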

    Very interesting read.