13 comments

  • schmeichel 6 minutes ago

    This seems promising! Great work! Any chance there will be an Ollama Modelfile for the masses?
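
    In the meantime, something like this minimal Modelfile might work, assuming a GGUF conversion of the weights exists (the filename and parameter values below are guesses, not anything official):

      # hypothetical quantized weights file; point at whatever conversion you have
      FROM ./steiner-32b-preview.Q4_K_M.gguf
      PARAMETER temperature 0.7
      PARAMETER num_ctx 8192

    Then: ollama create steiner -f Modelfile && ollama run steiner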

  • swyx an hour ago

    advice to OP - you hurt your own credibility posting on medium dot com. just blog on huggingface or substack or hashnode.

    • peakji 25 minutes ago

      I'm new here. Just curious, why avoid Medium? Is it a Hacker News thing, or did I miss something?

  • zby 2 hours ago

    Can it be mixed with the sampling based approaches from optillm (https://github.com/codelion/optillm)?

    • peakji 2 hours ago

      Approaches like best-of-n sampling and majority voting are definitely feasible. But I don't recommend trying things related to CoT, as they might interfere with the internalized reasoning patterns.
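
      For example, majority voting can sit entirely outside the model. Here's a rough sketch against any OpenAI-compatible endpoint, such as an optillm proxy (the URL and model name are placeholders, not a real deployment):

        from collections import Counter
        from openai import OpenAI

        # Any OpenAI-compatible server works here; the endpoint and
        # model name are placeholders.
        client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

        def sample_answer(question: str, temperature: float = 0.7) -> str:
            resp = client.chat.completions.create(
                model="steiner-preview",
                messages=[{"role": "user", "content": question}],
                temperature=temperature,
            )
            return resp.choices[0].message.content.strip()

        def majority_vote(question: str, n: int = 8) -> str:
            # Draw n independent samples and keep the most common answer.
            answers = [sample_answer(question) for _ in range(n)]
            return Counter(answers).most_common(1)[0][0]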

  • nwnwhwje 2 hours ago

    Silly question time.

    Is this a fine-tuned LLM, for example a drop-in replacement for Llama etc.?

    Or is it some algorithm on top of an LLM, doing some chain of reasoning?

    • peakji 2 hours ago

      It is an LLM fine-tuned using a new type of dataset and RL reward. It's good at reasoning, but I wouldn't recommend it as a drop-in replacement for Llama for general tasks.
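
      If you do want to try it, it loads like any other causal LM with transformers. A minimal sketch (the model ID below is my best guess; double-check it on Hugging Face):

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "peakji/steiner-32b-preview"  # assumed ID; verify on HF
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, device_map="auto", torch_dtype="auto"
        )

        messages = [{"role": "user", "content": "Your question here"}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=1024)
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:],
                               skip_special_tokens=True))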

  • Mr_Bees69 3 hours ago

    Really hope this goes somewhere; o1 without OpenAI's costs and restrictions would be sweet.

    • ActorNightly 2 hours ago

      OpenAI's o1 isn't really going that far though. It's definitely better in some areas, but not better overall.

      I'm wondering if we can push chain of thought further down into the computation level to replace a lot of the matrix multiplies. Like smaller transformers with fewer parameters, plus more selection of which transformer to use through search.
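
      Toy sketch of the kind of thing I mean, with stubs standing in for the small experts and the value function (nothing here is a real system):

        from typing import Callable, List

        Expert = Callable[[str], str]    # a small specialized transformer
        Scorer = Callable[[str], float]  # a learned value function

        def search_step(state: str, experts: List[Expert], score: Scorer) -> str:
            # Instead of one big forward pass, run each small expert on the
            # current state and keep the continuation the scorer likes best.
            candidates = [expert(state) for expert in experts]
            return max(candidates, key=score)

        def reason(question: str, experts: List[Expert], score: Scorer,
                   steps: int = 5) -> str:
            state = question
            for _ in range(steps):
                state = search_step(state, experts, score)
            return state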

    • peakji 3 hours ago

      The model can already answer some tricky questions that other models (including GPT-4o) have failed to address, achieving a +5.56 improvement on the GPQA-Diamond dataset. Unfortunately, it has not yet managed to reproduce inference-time scaling. I will continue to explore different approaches!

      • swyx 42 minutes ago

        not sure i understand the results. it's based on qwen 32b, which is 49.49, and your best model is 53.54. results haven't shown that your approach adds significant value yet.

        can you compare with just qwen 32b with CoT?

        • peakji 31 minutes ago

          The result for Qwen2.5-32B (49.49) already uses CoT prompting; only the Steiner models do not.

          More importantly, I highly recommend trying these out firsthand (not only Steiner, but all reasoning models). You'll find that reasoning models can solve many problems that other models of the same parameter size cannot handle. Existing benchmarks may not reflect this well, as I mentioned in the article:

          "... automated evaluation benchmarks, which are primarily composed of multiple-choice questions and may not fully reflect the capabilities of reasoning models. During the training phase, reasoning models are encouraged to engage in open-ended exploration of problems, whereas multiple-choice questions operate under the premise that "the correct answer must be among the options." This makes it evident that verifying options one by one is a more efficient approach. In fact, existing large language models have, consciously or unconsciously, mastered this technique, regardless of whether special prompts are used. Ultimately, it is this misalignment between automated evaluation and genuine reasoning requirements that makes me believe it is essential to open-source the model for real human evaluation and feedback."