Academic Research Skills for Claude Code

(github.com)

42 points | by arnon 3 hours ago

14 comments

  • janpeuker 6 minutes ago

    While I agree most of this seems to go too far, I do like the idea of the Socratic mode with State-Challenge-Reflect reflection. I often use LLMs in the same way, with a skeleton "brief" document and separate chapters that I ask it to fill based on my input; basically augmented note-taking (references, coherence, in-scope vs. out-of-scope, arguments considered, pressure points, vulnerabilities, etc.).

  • apwheele 2 hours ago

    There needs to be a new name for people creating these with no obvious validation.

    Skill spam?

    • AndyNemmity an hour ago

      Define obvious validation? What is the signal that tells you one is reasonable vs. another?

      I find the only way to do that is to look at it; if it passes some visual tests, try it, and then A/B test whether it's any better than going without it.

      • theptip 40 minutes ago

        Some sort of eval, e.g. TermBench, implemented in Harbor.

        It’s an insane amount of effort to build shareable, reusable, comprehensive evals, which is why almost all skills are stuck in the “vibes” phase.

        That said, I think it’s quite easy to skim/intuit these sorts of skills and do horizontal gene transfer into your own vibes-based system. If you use the skills regularly, you can construct a cheap personal eval that is a lot easier to maintain, and use it to compare a new skill/plugin. Something like “please write a paper on <my personal unpublished thesis>” is a good starting point. You get a good feel for whether a skill is better than vanilla by running it a couple of times and watching the failure modes.
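
        Concretely, the cheap personal eval can be a tiny script. A sketch of what I mean (the `claude -p` headless invocation and the two-directory layout are assumptions on my part; swap in however you actually drive the model):

            import pathlib
            import subprocess

            PROMPT = "please write a paper on <my personal unpublished thesis>"
            TRIALS = 3  # single runs are noisy, so collect a few

            def run_trial(variant_dir: str, trial: int) -> None:
                # Headless prompt; the skill under test is installed in one
                # working directory and absent from the other (my assumption).
                result = subprocess.run(
                    ["claude", "-p", PROMPT],
                    cwd=variant_dir,
                    capture_output=True,
                    text=True,
                )
                pathlib.Path(variant_dir, f"trial_{trial}.md").write_text(result.stdout)

            for variant in ("with_skill", "without_skill"):
                for trial in range(TRIALS):
                    run_trial(variant, trial)

        Then read the paired transcripts side by side and note the recurring failure modes.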

        • AndyNemmity 32 minutes ago

          Yeah, honestly I think we're in a phase where you shouldn't use anyone else's skills. Instead, point your agent at a repo of skills, have it really read them, and then ask what of value there is to potentially rewrite in your style, based on your preferences.

          I have a complex setup with a lot of things built around what I do. I don't know how anyone could reasonably get their head around any of it; it's a research project in itself.

          So I tell people: please don't use it. Just point your Claude Code at it and see if there's anything useful for you.

      • apwheele an hour ago

        So yes, A/B testing broadly speaking is what I was saying (test cases that can show it is actually better).

        Even in this repo, the "b" showcase alone, showing the outputs as-is (with no clear documentation of how they were generated; is it headless in a CI pipeline somewhere?), is not good: https://github.com/Imbad0202/academic-research-skills/tree/m....

        • AndyNemmity 40 minutes ago

          I run a lot of A/B testing, but I'm not sure showing it actually communicates all that much. Since these are non-deterministic systems, even showing you an A/B test from when I made the decision a month ago doesn't really mean a whole lot.

          I agree we need clearer indications of value; I just don't quite understand how to do that legitimately, in a fair and honest way.
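
          One thing that at least helps with the honesty part: grade the transcripts blind, so "I built this one" bias can't leak into the comparison. A sketch (it assumes paired transcripts already sit in with_skill/ and without_skill/ directories; that layout is my assumption, not anyone's convention):

              import pathlib
              import random

              with_skill = sorted(pathlib.Path("with_skill").glob("trial_*.md"))
              without = sorted(pathlib.Path("without_skill").glob("trial_*.md"))

              key = []  # which variant was shown as A or B; consult only after grading
              for i, pair in enumerate(zip(with_skill, without)):
                  files = list(pair)
                  random.shuffle(files)  # hide which variant is which
                  key.append({label: f.parent.name for label, f in zip("AB", files)})
                  for label, f in zip("AB", files):
                      print(f"--- trial {i}, option {label} ---")
                      print(f.read_text())

              # Grade each trial A-or-B by hand, then print(key) to unblind.

          It doesn't fix the non-determinism, but repeated trials plus blind grading is at least something anyone who doubts the result can rerun.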

    • mmooss 21 minutes ago

      The OP evaluates what they have developed with great rigor and describes the evaluation in detail. What do you feel is missing?

    • elashri an hour ago

      Skill-slop.

    • adityamwagh an hour ago

      SkillBros?

  • SubiculumCode 14 minutes ago

    The site opens with how it keeps humans in the loop, but as you continue reading it seems like almost a full-automation feature.

  • evanwolf 33 minutes ago

    Academic skills are a vector for cite injection.

  • mmooss 12 minutes ago

    > Frame-lock: I asked the AI to run a devil's advocate debate against its own thesis. It did — four rounds, each more refined than the last. But every round stayed inside the frame I'd set. The DA attacked arguments, never premises. It never asked "are we even discussing the right question?" This is the same pattern that caused the 31% citation error rate in v2.7's stress test: the verifying AI and the generating AI share the same cognitive frame.

    > Sycophancy under pushback: Every time I challenged the DA's attacks, it conceded too quickly. It retracted findings faster than it launched them. The model's training rewards conversational harmony — so "the user pushed back" was treated as evidence that the attack was wrong, when often it just meant the user was persistent.

    Why do LLMs output so much sycophancy and other modes of conning humans (as in confidence games): confident text, a highly agreeable tone, going along with whatever the user wants, etc.? It's manipulative output.

    We see it everywhere and know it well (it's even sort of a running joke), but we're not challenging the assumption: why that output? It seems like a design choice made by the LLM's developer; why would the process of constructing LLMs automatically create that sort of output? I'd say LLMs are in the ~99th percentile of that sort of writing, which means it's not the typical writing they are trained on.

    The only reason (that I know of) to think it's not a design choice is that so many different LLMs do it; but very possibly they saw the success of ChatGPT using that mode and all followed it, and that is now what users expect. Maybe it's a way of manipulating users into trusting this new, possibly intimidating technology. Are there LLMs that don't output in that mode by default (i.e., without prompting them to do otherwise)?

    • cyanydeez 5 minutes ago

      The training method and design make it an emergent property: disagreement stops token generation, and there aren't multi-round training regimes that follow through on reasonable disagreements.