Show HN: Watch LLMs play 21,000 hands of Poker

(pokerbench.adfontes.io)

34 points | by jazarwil 2 days ago

19 comments

  • tcpais a day ago

    Finally, a way to settle the model wars that actually matters: Texas Hold'em. That 3D replay view is sick! ♠♦ I spent way too long watching the replay on Game 2a58900d. It’s wild to see the chain of thought mapped against the betting rounds. It really exposes when a model is hallucinating a strong hand versus actually calculating pot odds. This 'PokerBench' might actually become the standard for measuring agentic risk-taking.
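
    For reference, the pot-odds check itself is one line of arithmetic; a rough sketch, with the pot and bet sizes made up for illustration:

        # Pot odds: minimum equity needed for a call to break even.
        pot = 100      # chips already in the pot (made-up number)
        to_call = 50   # cost of calling (made-up number)
        break_even = to_call / (pot + to_call)
        print(f"Calling is +EV only with equity above {break_even:.1%}")  # 33.3%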

    • falloutx a day ago

      yeah the 3d view is amazing

  • tanvach a day ago

    People are reading into this a little too much; it looks to me like a random walk. You should try rerunning the trial (or have multiple running in parallel) and see if the ranking is robust.
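
    One cheap check that doesn't need new runs is bootstrapping the existing per-game results; a rough sketch, with fabricated numbers standing in for the real per-game profits:

        # Resample per-game profits with replacement and see how often the
        # observed ranking of two models survives. The data below is made up.
        import random

        model_a = [random.gauss(5, 200) for _ in range(163)]   # hypothetical results
        model_b = [random.gauss(-5, 200) for _ in range(163)]  # hypothetical results

        n_boot, a_on_top = 10_000, 0
        for _ in range(n_boot):
            if sum(random.choices(model_a, k=163)) > sum(random.choices(model_b, k=163)):
                a_on_top += 1
        print(f"A ranks above B in {a_on_top / n_boot:.0%} of resamples")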

    • jazarwil a day ago

      Wdym exactly? I ran 163 games; are you suggesting more games, or something else?

      • whattheheckheck a day ago

        You need to simulate 50k to 200k hands to get a true winrate
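
        The standard-error math behind that range, assuming the oft-quoted ~100 BB/100 standard deviation for no-limit hold'em (an assumption, not a number measured from this benchmark):

            # Standard error of a winrate measured in BB per 100 hands.
            import math

            sigma = 100.0  # assumed std dev in BB/100
            for hands in (21_000, 50_000, 200_000):
                se = sigma / math.sqrt(hands / 100)
                print(f"{hands:>7} hands: winrate +/- {2 * se:.1f} BB/100 (95% CI)")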

        • jazarwil 8 hours ago

          I'd love to run more games, just very expensive unfortunately.

  • alfonsodev 17 hours ago

    Really cool. I’m curious how this would compare against a deterministic bot that uses probability tables.
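
    Even a simple threshold bot acting off a precomputed equity table would make a nice floor; a minimal sketch (the table entries are placeholders, not real equities):

        # Hypothetical baseline: act on table equity vs. pot odds.
        PREFLOP_EQUITY = {("A", "A"): 0.85, ("K", "K"): 0.82, ("7", "2"): 0.35}

        def baseline_action(hole_cards, to_call, pot):
            equity = PREFLOP_EQUITY.get(hole_cards, 0.45)  # default for unlisted hands
            break_even = to_call / (pot + to_call)
            if equity > break_even + 0.15:  # comfortable edge: raise
                return "raise"
            return "call" if equity > break_even else "fold"

        print(baseline_action(("A", "A"), to_call=50, pot=100))  # -> raise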

  • alalani1 a day ago

    Do you have any idea why the win rate for GPT-5.2 is higher than Gemini 3 Flash yet the former loses money while the latter earns money? Is it just bet sizing (betting more when it has a good hand) or something else?

    • jazarwil a day ago

      There are a few reasons that come to mind, such as winning larger pots on average, and also playing more hands by virtue of not getting knocked out as frequently.
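
      A toy illustration of the bet-sizing effect, with made-up numbers:

          # Winning more hands can still lose chips if the lost pots are bigger.
          wins, avg_won_pot = 60, 40     # wins 60 of 100 showdowns, in small pots
          losses, avg_lost_pot = 40, 70  # loses 40, but in bigger pots
          print(wins * avg_won_pot - losses * avg_lost_pot)  # -> -400 chips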

  • VK-pro a day ago

    Very, very fun. Just glancing at this quickly at lunch, but is there any plan to incorporate tool use?

    • jazarwil a day ago

      Not at the moment, do you have something in mind?

  • falloutx a day ago

    Fun. Any idea how much the cost per game would be? I'm worried 160 isn't a big enough sample size.

    • jazarwil a day ago

      It greatly depends on the models. The 6-handed setup with Opus and Pro cost about $30/game. The 4-handed setup with just small models was $6/game. I'd love to run more but I already spent quite a bit as it is.
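
      For scale, bounding the total with those per-game figures (the actual mix of setups across the 163 games isn't broken out here):

          # Cost bounds from the stated $6 and $30 per-game figures.
          games = 163
          print(f"${games * 6} to ${games * 30}")  # $978 to $4,890, depending on mix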

      • falloutx a day ago

        Yeah, that's costly. 160 games still gives 1000+ total decisions, and you can see some trends in how they think about the game state.

        • jazarwil a day ago

          Oh to be clear, there are ~21k hands here, and far more decisions than that.
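
          Back-of-the-envelope (the per-hand action counts are guesses, not measured):

              # ~21k hands, several players each acting a couple of times per hand.
              hands, players, actions_each = 21_000, 5, 2  # assumed averages
              print(hands * players * actions_each)  # -> 210,000, order of magnitude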

  • thorawaytrav a day ago

    Do you have any idea why smaller models are better than large ones?

    • jazarwil a day ago

      I've seen some theories tossed around but I don't think I'm qualified to offer an authoritative answer. Gemini 3 Pro specifically seems to be consistently "tighter" and more passive than Flash.

  • Onavo a day ago

    What about open-source models? I remember from the trading benchmarks that DeepSeek performed pretty well.

    • jazarwil a day ago

      I didn't incorporate any open-weights/open-source models, just to limit the number of API providers I had to juggle, but it's just a config change if somebody wants to try a run with them.