5 comments

  • joaquin_arias 6 hours ago

    This looks really useful! I like how you added OS-level sandboxing and deterministic guardrails instead of relying on LLM-based intent checks — that feels much safer for running autonomous agents.

    Curious: have you tried integrating this with multi-agent setups, where multiple Claude Code instances interact? I wonder how the guardrails would scale when agents start triggering each other’s commands.

    Also, do you have plans for a lightweight visualization dashboard for monitoring blocked vs allowed commands in real time? It could help developers trust the system more quickly.

    • LunarFrost88 5 hours ago

      Thanks for the feedback. Love the point about the visualization dashboard, will add that now!

      >> have you tried integrating this with multi-agent setups, where multiple Claude Code instances interact?

      We wanted to solve for the most frequent use case first (single-agent execution), but multi-agent is definitely on the cards. If you've got some use cases in mind, let me know and we'll apply Railyard to them.

  • simosmik 5 hours ago

    That’s nice work, guys. Knowing Anthropic, their auto-mode, which releases on the 12th, is going to leave a lot to be desired.

    • LunarFrost88 4 hours ago

      Thanks! We're aiming to be complementary, focused on hardening Claude Code to production-grade guarantees. I think this can only be done properly with OSS, because teams will need to adjust the guardrails to make the runtime uniquely theirs.

  • oliver_dr 6 hours ago

    [dead]