11 comments

  • Reubend 10 hours ago

    Because the website doesn't seem to show a sample size for the runs, I assume they only ran the suite once.

    The models are nondeterministic, and therefore it's pretty normal for different runs to give different results.

    I don't see this as evidence that Opus 4.6 has gotten worse.

    • slurpyb 3 hours ago

      I would love to know what you're doing in the harness to not feel the total degradation in experience compared to December and January.

    • bsder 5 hours ago

      > The models are nondeterministic, and therefore it's pretty normal for different runs to give different results.

      And how is that an excuse?

      I don't care about how good a model could be. I care about how good a model was on my run.

      Consequently, my opinion on a model is going to be based around its worst performance, not its best.

      As such, this qualifies as strong evidence that Opus 4.6 has gotten worse.

    • dlahoda 6 hours ago

      Are models really non-deterministic?

      • Rury 5 hours ago

        People are describing the results when they say models are non-deterministic. Give it the same exact input twice, and you'll get two different outputs. Deterministic would mean the same input always gives the same output.

      • loneboat 6 hours ago

        Yes. Look up LLM "temperature" - it's a sampling parameter that controls how deterministically they behave.
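
        A toy sketch of what temperature does at sampling time (illustrative Python, not any particular model's actual code):

          import math, random

          def sample_next_token(logits, temperature):
              # temperature == 0: always take the highest-scoring token (greedy, deterministic)
              if temperature == 0:
                  return max(range(len(logits)), key=lambda i: logits[i])
              # temperature > 0: rescale the logits, then sample from the softmax distribution
              scaled = [x / temperature for x in logits]
              m = max(scaled)
              weights = [math.exp(s - m) for s in scaled]
              return random.choices(range(len(logits)), weights=weights)[0]

        Higher temperature flattens the distribution, so repeated runs on the same prompt diverge more; at 0 it collapses to greedy argmax.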

        • csomar 5 hours ago

          The models are deterministic; the inference is not.

          • jmalicki 5 hours ago

            What does that even mean?

            Even then, depending on the specific implementation, floating-point associativity can be an issue across batch sizes, across exactly how the KV cache is implemented, etc.

            • csomar 4 hours ago

              That's still an inference-time issue. If you have perfect inference with a temperature of zero, the models are deterministic. There is no intrinsic randomness in software-only computing.

              • jmalicki 3 hours ago

                Floating-point associativity differences can lead to non-determinism at temperature 0 if the order of operations is non-deterministic.

                Anyone with reasonable experience in GPU computation who pays attention knows that even randomness in warp completion times can easily lead to non-determinism due to associativity differences.

                For instance: https://www.twosigma.com/articles/a-workaround-for-non-deter...

                It is very well known among practitioners that CUDA isn't strictly deterministic due to these factors.

                Differences in batch sizes of inference compound these issues.

                Edit: to be more specific, the non-determinism mostly comes from map-reduce style operations, where the map is deterministic, but the order in which items are sent to the reduce steps (or how elements are arranged in the tree for a tree reduce) can be non-deterministic.
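
                You can see the same effect on a CPU; this toy Python sketch just sums identical numbers in two different orders (on a GPU the ordering comes from scheduling rather than an explicit loop):

                  import random

                  # The same 10,000 floats, summed in two different orders.
                  xs = [random.uniform(-1.0, 1.0) * 10 ** random.randint(-8, 8) for _ in range(10_000)]

                  forward = 0.0
                  for x in xs:
                      forward += x

                  backward = 0.0
                  for x in reversed(xs):
                      backward += x

                  # Mathematically equal, but float addition is not associative,
                  # so the two orderings usually disagree in the low-order bits.
                  print(forward == backward)       # usually False
                  print(abs(forward - backward))   # tiny but nonzero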

  • ehtbanton 3 hours ago

    Benchmarks like this one are designed to thoroughly test the model across several iterations. 15% is a MASSIVE discrepancy.

    Come on Anthropic, admit what you're doing already and let us access your best models unhindered, even if it costs us more. At the moment we all just feel short-changed.