Agent Reading Test

(agentreadingtest.com)

34 points | by kaycebasques 4 hours ago

8 comments

  • theyCallMeSwift 2 hours ago

    I love this idea, but my hypothesis is that 90% of the agents people actually use today would fail this test inadvertently (a false negative).

    Industry best practice and the standard implementation for most agents right now is to do web browsing / fetching via subagents. Their output is summarized by a cheaper model and then passed back to the parent. Unless the actual content the subagents see is preserved, it's very unlikely the `CANARY-` strings would survive into the output.
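    A minimal sketch of the pipeline described above (the fetch and summarize functions are stand-ins, not any real agent framework's API): the parent agent only ever sees the subagent's summary, so an opaque canary token present in the raw page never reaches it.

```python
import re

# Hypothetical canary pattern resembling the test site's `CANARY-` strings.
CANARY_RE = re.compile(r"CANARY-[A-Z]+-[A-Z]+-\w+")

def fetch(url: str) -> str:
    # Stand-in for a real web fetch; the page body embeds a canary string.
    return "Install docs here. CANARY-SPA-JSONLY-prism More text."

def summarize(text: str) -> str:
    # Stand-in for the cheaper summarizer model: it keeps only the gist,
    # so opaque tokens like canaries are typically dropped.
    return "The page contains installation documentation."

def subagent_browse(url: str) -> str:
    # The parent agent never sees the raw page, only the subagent's summary.
    return summarize(fetch(url))

raw = fetch("https://example.com/docs")
passed_up = subagent_browse("https://example.com/docs")
print(CANARY_RE.search(raw) is not None)        # True: canary is in the raw page
print(CANARY_RE.search(passed_up) is not None)  # False: lost in summarization
```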

    Any thoughts on how you'd change the test structure with this in mind?

    • dacharyc an hour ago

      Hey there - I'm the test author, and you've hit on one of the main points. Summarization/relevance-based content return is a real consideration for some of the agent platforms (although I've found others actually do better here than I expected!) - which is part of the point I'm trying to drive home to folks who aren't as familiar with these systems.

      I chose to structure it this way intentionally because this is the finding. Most people are surprised that agents aren't 'seeing' everything that's there, and get frustrated when an agent says something isn't there when it clearly is. Raising awareness of this is one of the main points of the exercise, to me.

  • dostick 3 hours ago

    The tests should have negative weights based on how often each issue is encountered and its impact. Check 2 (SPA) should get something like 8 negative points out of 10, since it's the most common blocker. And the whole test should use an inverse score.
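    A rough sketch of the scheme being proposed: start from full marks and subtract a per-check penalty weighted by impact and frequency, flooring at zero. The check names and weights below are invented for illustration, not the test's actual values.

```python
# Assumed penalty weights: the SPA check dominates because it is the
# most common blocker in practice.
WEIGHTS = {"spa_shell": 8, "conneg": 2, "robots_block": 3}

def inverse_score(failed_checks, max_points=10):
    # Each failed check subtracts its weight from the maximum score.
    penalty = sum(WEIGHTS[check] for check in failed_checks)
    return max(max_points - penalty, 0)  # floor at zero

print(inverse_score(["spa_shell"]))               # 2: the common blocker dominates
print(inverse_score(["conneg", "robots_block"]))  # 5
```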

    • dacharyc an hour ago

      Yeah, good call, we're on the same page about that. I designed this tool (agentreadingtest.com) to raise awareness of these issues in a more general way, so people can point their agents at it and see how they perform. Separately, I maintain a related tool that can actually assess these issues in documentation sites: https://afdocs.dev/

      My weighting system there scores the number of pages affected by SPA and caps the possible score at a "D" or "F" depending on the proportion of pages affected: https://afdocs.dev/interaction-diagnostics.html#spa-shells-i...

      I've tried to weight things appropriately in assessing actual sites, but for the test here, I more wanted to just let people see for themselves what types of failures can occur.

  • massimoto 2 hours ago

    Would love to see some results for different providers. The tests look super logically thought out, but could use a TL;DR (too lazy; didn't run) output.

    Claude Web Opus 4.6 Extended: 14 / 20 points

    x:CANARY-SPA-JSONLY-prism x:CANARY-CONNEG-MD-sigma

    • dacharyc an hour ago

      Hah, that's actually what drove me to try to create this to begin with. I've been writing a lot about these issues, and someone said to me:

      > It'd be nice to have a test harness: "Test my agent," to score them and give you a benchmark score (like graphics cards, etc.).
      >
      > Agent XYZ: reads only X% of the content it accesses.

      I synced up with a colleague of mine who is testing retrieval behaviors across agent platforms right now, and writing about them at: https://rhyannonjoy.github.io/agent-ecosystem-testing/

      The info we have so far isn't consistent enough for a standardized benchmark, but it's on our radar to produce something like this in the future as we home in on how to assess this more consistently, or at least how to compare outputs in a more standardized way.

  • kaycebasques 4 hours ago
    • dang 3 hours ago

      Thanks! We'll put this in the toptext as well.