Sandboxes will be left behind in 2026. We don't need to reinvent isolated environments; that isn't even the main issue with OpenClaw - literally deploy it in a VM on any cloud and you get all the same benefits.
We need to know whether the email being sent by an agent is supposed to be sent, and whether an agent is actually supposed to be making that transaction on my behalf, etc.
Well, the challenge is knowing whether the action is supposed to be executed BEFORE it is executed. If the email with my secrets is sent, it is too late to deal with the consequences.
Sandboxes could provide that level of observability; HOWEVER, it is a hard lift. Yet I don't have better ideas either. Do you?
If you extend the definition of a sandbox, then yeah.
Solutions, no. For now it's continued cat-and-mouse, with things like "good agents" in the mix (i.e. AI as a judge - which is of course just as exploitable through prompt injection) and deterministic policy where you can (e.g. OPA/Rego).
We should continue to enable better integrations with the runtime - that's why I created the original feature request for hooks in Claude Code.
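To make the deterministic-policy point concrete, here is a minimal sketch in Rust of the kind of pre-execution check a pre-tool-use hook could run: it reads a proposed tool call as JSON and denies it unless the target host is on an allowlist. The JSON shape, the exit-code convention, and the hard-coded allowlist are illustrative assumptions, not any particular project's interface.

```rust
// Minimal sketch of a deterministic pre-execution policy check for a hook.
// Assumes the hook receives the proposed tool call as JSON on stdin, e.g.
//   {"tool": "http_request", "url": "https://api.example.com/v1"}
// and that a nonzero exit code tells the caller to block the call.
// Cargo deps (assumed): serde_json = "1", url = "2"
use std::io::Read;
use std::process::exit;

fn main() {
    let mut input = String::new();
    std::io::stdin()
        .read_to_string(&mut input)
        .expect("read stdin");

    let call: serde_json::Value = serde_json::from_str(&input).expect("parse tool call");
    let url = call["url"].as_str().unwrap_or("");

    // Hypothetical allowlist; in practice this would come from config or a
    // policy engine decision (e.g. OPA/Rego) rather than being hard-coded.
    let allowed_domains = ["api.example.com", "internal.example.net"];

    let host = url::Url::parse(url)
        .ok()
        .and_then(|u| u.host_str().map(str::to_owned))
        .unwrap_or_default();

    if allowed_domains.contains(&host.as_str()) {
        exit(0); // allow the tool call
    }
    eprintln!("blocked: '{host}' is not on the allowlist");
    exit(2); // deterministic deny, before anything executes
}
```

The point of keeping this deterministic is that, unlike an "AI as a judge" check, it cannot be talked out of its decision by whatever ends up in the prompt.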
Instrumental convergence and the law of unintended consequences are going to be huge in 2026. I am excited.
Same! Sharing this link for my own philosophy around it; ignore the tool. https://cupcake.eqtylab.io/security-disclaimer/
Awesome to see a project deal with prompt injection. Using WASM is clever. How does this ensure that tools adhere to capability-based permissions without breaking the sandbox?
Instead of expecting the tools to adhere, the permissions are enforced. For example, to make an HTTP call with a secret key, the tool must go through a proxy service that enforces that the secret key is only used for the specific domain; if the call is allowed, the proxy service makes it, so the secret never leaks outside of the service.
However, this design is still under development, as it creates quite a few challenges.
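A rough sketch of that proxy idea as described above (my illustration, not the project's actual code): the tool only submits a description of the request, and the proxy attaches the secret and performs the call only when the destination host matches what that secret is allowed for. Names such as ProxyRequest and the github_token/api.github.com pairing are made up for the example.

```rust
// Sketch of an egress proxy that injects a secret only for its approved host.
// The tool never holds the key; it only names which secret it wants used.
// Cargo deps (assumed): url = "2", reqwest = { version = "0.12", features = ["blocking"] }
use std::collections::HashMap;

struct ProxyRequest {
    url: String,
    secret_name: String, // e.g. "github_token"
}

/// Which host a given secret may be sent to.
fn allowed_host(secret_name: &str) -> Option<&'static str> {
    match secret_name {
        "github_token" => Some("api.github.com"),
        _ => None,
    }
}

fn handle(req: ProxyRequest, secrets: &HashMap<String, String>) -> Result<String, String> {
    let url = url::Url::parse(&req.url).map_err(|e| e.to_string())?;
    let host = url.host_str().ok_or("request has no host")?;

    // Enforce: this secret may only travel to its approved domain.
    if allowed_host(&req.secret_name) != Some(host) {
        return Err(format!(
            "secret '{}' is not allowed for host '{}'",
            req.secret_name, host
        ));
    }

    let key = secrets.get(&req.secret_name).ok_or("unknown secret")?;

    // The proxy makes the call itself, so the key never crosses back into the
    // tool's sandbox; only the response does.
    let resp = reqwest::blocking::Client::new()
        .get(req.url.as_str())
        .bearer_auth(key)
        .send()
        .map_err(|e| e.to_string())?;

    resp.text().map_err(|e| e.to_string())
}

fn main() {
    let mut secrets = HashMap::new();
    secrets.insert("github_token".to_string(), "ghp_example".to_string());

    // Rejected before any network traffic: evil.example is not the approved host.
    let bad = ProxyRequest {
        url: "https://evil.example/exfiltrate".into(),
        secret_name: "github_token".into(),
    };
    println!("{:?}", handle(bad, &secrets));
}
```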
> Using WASM is clever
Every time a project that uses WASM gets shared.
What runtimes are supported? I don't think I saw that mentioned in the README.
Fun fact: it's being developed by one of the authors of "Attention is all you need"
Worth mentioning an additional credential (or not): he's also the creator of "the platform powering the agentic future" (blockchain) - https://www.near.org/
Reminds me of the LocalGPT that was posted recently too (but which hasn't been updated in 7 months), so it's nice to see a newer Rust-based implementation!
I suspect OCI wins the sandbox space in the enterprise, and everything else will be for hobbyists and companies like Vercel that have a very narrow view of how software should be run.
Vibe coded, eh? https://github.com/nearai/ironclaw?tab=readme-ov-file#archit...
I think the guys who are developing this (Illia Polosukhin of "Attention is all you need") and others know enough to leverage their skills with AI vs. producing slop.
Clearly this developer knows the trick of developing with AI: adding "… and make it secure" to all your prompts. /s
Huh, what's the benefit?
It's a hardened, security-first implementation. The WASM runtime specifically is for isolating tools in sandboxes.
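For a sense of what a WASM runtime buys you for tool isolation, here is a minimal sketch using the wasmtime crate; it is my own illustration, not IronClaw's actual code, and tool.wasm plus the exported run function are hypothetical names. The guest gets no filesystem, network, or other ambient OS access unless the host explicitly links such capabilities in, and fuel metering bounds how long it can run.

```rust
// Minimal sketch: running an untrusted tool module under wasmtime.
// Cargo deps (assumed): wasmtime, anyhow = "1" (exact API varies by wasmtime version).
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.consume_fuel(true); // enable execution budgeting

    let engine = Engine::new(&config)?;

    // "tool.wasm" and its exported "run" function are hypothetical.
    let module = Module::from_file(&engine, "tool.wasm")?;

    let mut store = Store::new(&engine, ());
    store.set_fuel(5_000_000)?; // traps when the budget is exhausted (add_fuel in older versions)

    // No imports are provided, so the module cannot reach the host at all; a
    // real host would expose a narrow, capability-checked import set here.
    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;

    Ok(())
}
```

Roughly: a container gives you a full Linux userland by default and you subtract from it, while a bare WASM instance starts with nothing and you add capabilities, which is a closer fit for per-tool permissioning.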
WASM has issues with certain languages - why WASM and not OCI?
Docker is not a security boundary?