The agent harness belongs outside the sandbox

(mendral.com)

24 points | by shad42 an hour ago

15 comments

  • Koffiepoeder a few seconds ago

    I am looking for:

    - Easy 1-line CLI agent spawning

    - Automatic context transfer (i.e. a bit like git worktrees)

    - Fully containerised, but remote (a bit like pods)

    - Central, mitm-proxy zero trust authn/authz management (no keys or credentials inside the agents)

    - Multi agent follow-up functionalities

    - Fully self hosted/FOSS

    Basically a very dev-friendly, secure, "kubernetes"-like solution for running remote agents.

    Does anyone have an idea of how to achieve this, or know of potential technologies?

  • blcknight 20 minutes ago

    I am not sure anyone knows what a harness is at this point. I've heard 17 different definitions of it. It's almost like a buzzword in search of a problem.

    • aluzzardi 14 minutes ago

      Author here. My definition is: you take an agent, remove the model and you’re left with the harness.

      Tools, memories, sandboxing, steering, etc

    • irishcoffee 5 minutes ago

      I don’t even know what an agent means, let alone harness.

  • saltcured an hour ago

    Sure, the experimental, agentically-developed code should be tested in a sandbox. This sandbox should contain the damage of the code execution when it goes wrong.

    But shouldn't there really be another sandbox where the agentic tool calls execute? This is to contain the damage of the tool execution when it goes wrong.

    And, the agent harness itself should either implement or be contained in a third sandbox, which should contain the damage of the agent. There should be a firewall layer to limit what tool requests the agent can even make. This is to contain the damage of the agent when it formulates inappropriate requests.

    The agent also should not possess credentials, so it cannot leak them to the LLM and allow them to be transformed into other content that might leak out via covert channels.
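      The firewall layer described above could be as simple as a policy check the harness applies before any tool request leaves it. A minimal sketch in Go, assuming a simple allowlist-plus-denylist policy (all names here are illustrative, not from the article):

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      // ToolRequest is a hypothetical representation of a call the agent asks for.
      type ToolRequest struct {
      	Tool string // e.g. "bash", "read_file"
      	Arg  string
      }

      // allowedTools is the firewall policy: anything not listed is rejected
      // before it ever reaches a sandbox.
      var allowedTools = map[string]bool{
      	"bash":      true,
      	"read_file": true,
      }

      // deniedSubstrings blocks obviously dangerous arguments even for allowed tools.
      var deniedSubstrings = []string{"rm -rf /", "curl ", "ssh "}

      // checkRequest returns nil if the request may proceed, or an error
      // explaining why it was blocked.
      func checkRequest(r ToolRequest) error {
      	if !allowedTools[r.Tool] {
      		return fmt.Errorf("tool %q not permitted", r.Tool)
      	}
      	for _, bad := range deniedSubstrings {
      		if strings.Contains(r.Arg, bad) {
      			return fmt.Errorf("argument contains blocked pattern %q", bad)
      		}
      	}
      	return nil
      }

      func main() {
      	fmt.Println(checkRequest(ToolRequest{Tool: "bash", Arg: "go test ./..."}))
      	fmt.Println(checkRequest(ToolRequest{Tool: "send_email", Arg: "secrets"}))
      }
      ```

      A real firewall would be policy-driven rather than hard-coded, but the key property is the same: the check runs in the harness's trust domain, outside anything the agent can influence.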

    • shad42 35 minutes ago

      Yes, it's also because the agent described in the post performs operations on the user's code (fixing CI pipelines, rerunning tests, etc.). So another big reason to use the sandbox is to run things like bash on user code. You don't want credentials or anything trusted inside that sandbox, including the LLM API key.

    • aluzzardi 26 minutes ago

      Author here. Depending on how it’s designed, the harness itself doesn’t need any sandboxing.

      At the end of the day, it’s a “simple” loop that calls an external API (LLM) and receives requests to execute stuff on its behalf.

      It’s not the agent running bash commands: you (the harness author) are, and you’re in full control of where and how those commands get executed.

      In the article’s case, bash commands are forwarded to a sandbox, nothing ever runs on the harness itself (it physically can’t, local execution is not even implemented in the harness).
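      The loop described above might look roughly like this in Go (the language shad42 mentions for their backend elsewhere in the thread); `callLLM` and `sandboxExec` are stand-ins for the real model API client and sandbox RPC, which the article does not show:

      ```go
      package main

      import "fmt"

      // Action is what the LLM asks the harness to do next. The shape is
      // illustrative; the real wire format is not shown in the article.
      type Action struct {
      	Done    bool
      	Command string // bash command the model wants run
      	Result  string // final answer when Done is true
      }

      // callLLM stands in for the external model API: it receives the
      // transcript so far and returns the next action. Stubbed here.
      func callLLM(transcript []string) Action {
      	if len(transcript) == 0 {
      		return Action{Command: "go test ./..."}
      	}
      	return Action{Done: true, Result: "tests pass"}
      }

      // sandboxExec stands in for the RPC that runs a command inside the
      // remote sandbox. The harness never executes anything locally.
      func sandboxExec(cmd string) string {
      	return fmt.Sprintf("(sandbox output of %q)", cmd)
      }

      // runHarness is the whole "agent": a loop that alternates between the
      // model and the sandbox until the model says it is done.
      func runHarness() string {
      	var transcript []string
      	for {
      		act := callLLM(transcript)
      		if act.Done {
      			return act.Result
      		}
      		out := sandboxExec(act.Command)
      		transcript = append(transcript, act.Command, out)
      	}
      }

      func main() {
      	fmt.Println(runHarness())
      }
      ```

      Since local execution is simply absent from the loop, the harness cannot run agent commands on its own host even by accident, which is the point aluzzardi is making.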

  • solidasparagus 19 minutes ago

    Why are two concurrent sessions updating the same memory key with different values? IMO it probably points to a fundamental flaw in how memory is being thought about and built.

    • aluzzardi 5 minutes ago

      Author here. Because of parallelism and non determinism.

      This problem is quite common and not limited to memories. For instance, Claude Code will block write attempts and steer the agent to perform a read first (because the file might have been modified in the meantime by the user or another agent).

      Same principle here: rather than trying to deterministically “merge” concurrent writes, you fail the last write and let the agent read again and try another write.
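      The fail-and-retry scheme described above is essentially optimistic concurrency control: each write carries the version it last read, and a stale version is rejected. A minimal in-memory sketch (my own illustration, not the article's implementation):

      ```go
      package main

      import (
      	"fmt"
      	"sync"
      )

      // entry pairs a memory value with a version counter.
      type entry struct {
      	value   string
      	version int
      }

      // MemStore rejects writes whose expected version is stale, forcing the
      // losing agent to read again before retrying.
      type MemStore struct {
      	mu   sync.Mutex
      	data map[string]entry
      }

      func NewMemStore() *MemStore {
      	return &MemStore{data: map[string]entry{}}
      }

      // Read returns the current value and version for a key
      // (zero value and version 0 for a missing key).
      func (s *MemStore) Read(key string) (string, int) {
      	s.mu.Lock()
      	defer s.mu.Unlock()
      	e := s.data[key]
      	return e.value, e.version
      }

      // Write succeeds only if expectVersion matches the stored version.
      func (s *MemStore) Write(key, value string, expectVersion int) error {
      	s.mu.Lock()
      	defer s.mu.Unlock()
      	e := s.data[key]
      	if e.version != expectVersion {
      		return fmt.Errorf("stale write for %q: have v%d, expected v%d", key, e.version, expectVersion)
      	}
      	s.data[key] = entry{value: value, version: e.version + 1}
      	return nil
      }

      func main() {
      	s := NewMemStore()
      	_, v := s.Read("build-notes")
      	fmt.Println(s.Write("build-notes", "agent A's note", v)) // first write wins
      	fmt.Println(s.Write("build-notes", "agent B's note", v)) // stale write fails
      }
      ```

      The agent that loses the race sees the error, re-reads the key (picking up the other session's change), and writes again against the new version.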

  • trjordan 33 minutes ago

    Nah. Worse is better.

    The reason agents work is because they have access to stuff by default. The whole world is context engineering at this point, and this proposal is to intermediate the context with a bespoke access layer. I put the bare minimum into getting my dev instance into a state where I can develop, because doing stuff (and these days: getting my agent to do stuff) is the goal.

    This makes slightly more sense if you're building a SaaS and trying to get others to give you access to their code, their documents, and the rest so you can run agents against it. But the easiest, most powerful way is to just hook the agents up to the place that's already set up.

    • ossa-ma a few seconds ago

      They are building exactly what you described, and this is their architectural solution for ensuring their YOLO agents don't nuke their customers' code/documents/databases: sandbox everything in the workspace, i.e. the git checkout the agent is working on, plus whatever's needed to run commands against it (compilers, package managers, etc.).

  • Retr0id an hour ago

    It took me a while to grok why this made any sense. I think the key context is that this is for hosting many agents as a service.

    • qezz 40 minutes ago

      Exactly, my understanding is also that they host agents as a service. The actual use case is only mentioned at the end of the article, which makes it hard to reason about.

      Anyway, general advice: treat harnesses like any other (third-party) software that you run on your server. Modern harnesses (the subscription ones from big companies) are black boxes. Would you run a random binary you fetched from the internet on your server? Claude Code, Codex, etc. are exactly that.

      • shad42 28 minutes ago

        We don't host 3rd-party agents (I don't know if that's what you implied). We built an agent that monitors CI pipelines, test failures, and performance, and auto-opens PRs to address the issues it finds. We host our agent loop on a backend (it's in Go), and we call out to the sandbox when we run operations involving the user's code.

  • 8thcross 29 minutes ago

    we are running a harness outside the sandbox, inside a sandbox.