LLM-as-a-Courtroom

(falconer.com)

22 points | by jmtulloss 4 hours ago

1 comment

  • aryamanagraw 9 minutes ago

    We kept asking LLMs to rate things on 1-10 scales and getting inconsistent results. It turns out they're much better at arguing positions than at assigning numbers, which makes sense given their training data. The courtroom structure (prosecution, defense, jury, judge) gave us adversarial checks we couldn't get from a single prompt. Curious if anyone has experimented with other domain-specific frameworks to scaffold LLM reasoning.
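
    The courtroom idea could be sketched roughly like this, assuming a generic `llm(prompt) -> str` callable (a hypothetical stand-in for any chat-completion client; the role names and prompt wording here are illustrative, not the article's actual prompts):

    ```python
    # Minimal sketch of a "courtroom" evaluation: adversarial role prompts
    # instead of asking one model for a 1-10 score. The `llm` argument is a
    # placeholder for any text-in/text-out model call.

    def courtroom_verdict(claim: str, llm) -> dict:
        """Run prosecution, defense, and judge passes over a claim."""
        prosecution = llm(
            "You are the prosecution. Argue as strongly as possible that "
            f"the following output is flawed:\n{claim}"
        )
        defense = llm(
            "You are the defense. Rebut the prosecution and argue the "
            f"output is sound.\nOutput: {claim}\nProsecution: {prosecution}"
        )
        verdict = llm(
            "You are the judge. Weigh both arguments and answer only "
            f"'pass' or 'fail'.\nProsecution: {prosecution}\nDefense: {defense}"
        )
        return {
            "prosecution": prosecution,
            "defense": defense,
            "verdict": verdict.strip().lower(),
        }

    # Usage with a deterministic stub model, for illustration only:
    def stub_llm(prompt: str) -> str:
        if "judge" in prompt:
            return "pass"
        return "argument text"

    result = courtroom_verdict("The summary is faithful to the source.", stub_llm)
    ```

    The point of the structure is that each role gets a single, well-scoped rhetorical task (attack, rebut, decide), which plays to what the models are good at, rather than one prompt carrying the whole judgment.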