5 comments

  • giancarlostoro an hour ago

Interesting. I built a ticketing system similar to Beads, which has yielded more predictable results with Claude and other models, and I'm currently building a custom harness. I'm able to use offline models, though my GPU memory bandwidth is much lower, and I'm also planning on doing something similar to what you've built, namely the editing tools and whatnot. I hate how long it takes for Claude to look for files; it feels wasteful. I'm still astounded that everyone else has figured out ways to speed up harnesses, but Claude Code is still slow as a slug. I don't even care if I'm waiting on the LLM in terms of slowness, but running local tools slowly bothers the living crap out of me. Stop using grep, RIPGREP IS FASTER!

    In any case, I'll have to check out Statewright after work ;)

  • password4321 43 minutes ago

Does it make sense to ship an MCP code mode API? I'm surprised you're recommending MCP as-is when you're concerned about optimizing context usage. I don't have a lot of hands-on experience either way yet, so I'm curious what's best and/or most popular... I understand MCP is less effort and still affordable at VC-subsidised prices.

    • azurewraith 36 minutes ago

For the integration piece that ties into Claude Code and the other places where AI is used most frequently? Yes, I think it does... We're not fighting context in Opus/Haiku as much as we are in smaller models, and we're only adding about six tools here, which is a smaller footprint than other MCP exposures. In my experimentation (using the core directly), a more direct/tight interface that doesn't bloat the tool space works better for smaller models.

  • davidkpiano 41 minutes ago

Pretty cool. Looks like stately.ai but geared towards agentic state machine workflows. Really interesting!

    • azurewraith 25 minutes ago

      Stately is pretty neat, I hadn't come across it yet... kind of like a state machine langflow or Node-RED.

I see constant posts on Reddit/HN about the ways AI is amazing and, at the same time, fudging it (literally). Nobody can make reliability guarantees on something that's non-deterministic and non-idempotent; nobody's AI workflow suite of tools can claim this. Prompting gets you closer to the mark but is still non-deterministic. Breaking the problem down into chunks with valid transition criteria, so that even tiny models can step through them, is what I believe gets us closer to where we want to be semantically.
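      The "valid transition criteria" idea can be sketched as a guard around model output: the agent proposes the next step, but the harness only accepts it if it's a legal transition from the current state. This is a minimal illustrative sketch; the names are hypothetical and not Statewright's actual API.

      ```python
      class WorkflowMachine:
          # Each state maps to the set of transitions legal from it.
          # Anything the model proposes outside this set is rejected,
          # which is how a non-deterministic agent gets a deterministic
          # envelope to operate inside.
          TRANSITIONS = {
              "draft":    {"review"},
              "review":   {"draft", "approved"},
              "approved": {"shipped"},
              "shipped":  set(),
          }

          def __init__(self, state="draft"):
              self.state = state

          def step(self, proposed):
              """Accept a model-proposed transition only if it is legal."""
              if proposed not in self.TRANSITIONS[self.state]:
                  raise ValueError(
                      f"illegal transition {self.state!r} -> {proposed!r}"
                  )
              self.state = proposed
              return self.state

      m = WorkflowMachine()
      m.step("review")      # legal: draft -> review
      m.step("approved")    # legal: review -> approved
      # m.step("draft")     # would raise: approved -> draft is not allowed
      ```

      Even a tiny model only has to pick one item from a small, explicit set at each step, and the harness, not the model, decides whether the pick is valid.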