Congrats on launching, and great testimonials!
What problem does it solve compared to the bazillion code execution sandboxing agents (and containers/VMs) already out there?
Overall, a lot of people are building their own code execution sandboxes around containers/VMs. Curious what's missing that makes people DIY this.
Here's my list of code execution sandboxing agents launched in the last year alone:
1. E2B
2. AIO Sandbox
3. Sandboxer
4. AgentSphere
5. Yolobox
6. yolo-cage
7. SkillFS
8. ERA
9. Jazzberry
10. Computer
11. Vibekit
12. Daytona
13. Modal
14. Cognitora
15. YepCode Run
16. Compute CLI
17. Fence
18. Landrun
19. Sprites
20. pctx-sandbox (pctx)
21. Sandbox Agent SDK
22. Lima-devbox
23. OpenServ
24. Browser Agent Playground
25. Flintlock
26. Agent Quickstart
27. Bouvet Sandbox
28. Arrakis
29. Cellmate (ceLLMate)
30. AgentFence
31. Tasker
Is this a common pattern, to have an agent request a sandbox? I feel like I'd want the whole agent running in its own sandbox to begin with. Firecracker does look like a decent solution for that.
I agree. I'm testing https://sprites.dev/ because of that.
The right link is https://github.com/vrn21/bouvet
Great idea that is already implemented as a feature by major AI providers, several well-funded startups, countless unfunded startups, and trivially solved per-user with any handful of existing technologies.
Truly baffling that it's in the top 5 of the front page. My first thought was bot-army upvoting, but the total points are quite low. That means this is some mod's personal idea of an especially interesting submission?
Having testimonials attributed to Gemini 3 Pro and Claude 4.5 Opus is... interesting. I'm curious what prompt was used to get those quotes.
Cool option, I'm building in the same space. We should chat!
What's interesting is whether the agent calls it explicitly, or whether it's just an alternative backend for terminal/bash/etc tool calls: your tool calls all run in microVMs, containers, isolated shells, a raw terminal, or clawd/molt with all your credentials, with weaker and weaker security demarcations as you go down that list?
Can someone elaborate on what's wrong with using containers as a sandbox?
It's because containers share the kernel with the host. Generally they're just not considered a security boundary. (Note that containers have come a long way on the security side, btw.)
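To make the kernel-sharing point concrete, here's a minimal sketch in Python (assuming a Linux host with Docker installed; the alpine image is just an arbitrary small image):

    import subprocess

    # Kernel release as reported by the host...
    host = subprocess.run(["uname", "-r"], capture_output=True, text=True)

    # ...and as reported from inside a container.
    guest = subprocess.run(
        ["docker", "run", "--rm", "alpine", "uname", "-r"],
        capture_output=True, text=True,
    )

    # Both print the same kernel release: the container has no kernel of
    # its own, so a kernel exploit inside it is an exploit on the host.
    print("host:     ", host.stdout.strip())
    print("container:", guest.stdout.strip())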
So it's mostly a security thing.
What about VMs? They offer strong isolation, since they don't share kernels, and they have long been a foundational piece of multi-tenant computing. So why would we put an extra layer on top and rebrand it as an AI agent sandboxing solution? I'm genuinely curious what pushes everyone to build their own and launch it here. Is it one of those tarpit ideas: driven by one's own need and easy to build?
But in the context of agents, does it matter?
Depends. Probably not usually. I've thought about this a bunch and I think the serious "threat" here isn't the agent acting maliciously --- though agents will break out of non-hardened sandboxes! --- but rather them exposing some vulnerability that an actual human attacker exploits.
I'd also add that I just don't like the idea in principle that I should have to trust the agent not to act maliciously. If an agent can run rm -rf / in an extreme edge case, theoretically it could also execute a container escape.
Maybe vanishingly unlikely in practice, but it costs me almost nothing to use a VM just in case. It's not impossible that current models would turn out to be poorly behaved, that attackers would publish malicious tutorials targeting LLMs, or that some shadowy figure would run a plausibly deniable attack against me through an LLM API.
Imo it's even more important in the context of agents, considering how capable these agents are getting and how much access we let them have.
One could theoretically use a prompt injection attack to exploit a privilege escalation vulnerability in the kernel.
Security matters if you want to demarcate where agents can play. Running the agent inside a strong VM is usually where that starts; a container isn't enough for full isolation, where the agent only sees the files you want it to, etc.
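A rough sketch of what that file-level demarcation can look like, in Python (assuming qemu-system-x86_64 is installed; the vmlinuz/initrd.img paths and the ./agent-workspace directory are placeholders for whatever minimal guest image and working directory you actually use):

    import subprocess

    # Boot a minimal guest whose only window into the host filesystem is
    # the single -virtfs share below; every other host file is invisible
    # to whatever the agent runs inside the VM.
    subprocess.run([
        "qemu-system-x86_64",
        "-m", "512M",
        "-nographic",
        "-kernel", "vmlinuz",        # placeholder guest kernel
        "-initrd", "initrd.img",     # placeholder initramfs
        "-append", "console=ttyS0",
        # The one directory the agent is allowed to touch:
        "-virtfs",
        "local,path=./agent-workspace,mount_tag=work,"
        "security_model=mapped-xattr,id=work",
    ], check=True)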
From what I've read others say on HN:
- resources
- security
- setup speed?
I suppose a lot depends on how and in what environment you're dealing with agents.
Resources might be an issue on a Mac if you have a bunch of agents running different things, trying to execute code in different containers. But that's an issue with the Mac and the way containers run inside a VM there.
Security-wise, there were concerns about prompt injection telling the agent to execute certain steps to escape the container. Possible, but I'm not aware of any actual cases of that.
Seems these things pop up here every so often, either using Firecracker or Docker/containers. How is this different from the other sandboxes? BTW I love that you got LLM testimonials lol
I'm building an alternative to firecracker here if you're looking for something wayy different: https://github.com/smol-machines/smolvm
We've considered Docker and Firecracker; will add smol to the working roster.
Context: we're building something with QEMU.
* Requirement: has to support LMW+AI (Linux/macOS/Windows + Android/iOS).
There are scenarios in which we might spin up microVMs inside that main VM, which by default is almost always a Debian Linux distro.
One scenario is, say, an ETL VM and an AI VM kept isolated for various things.
Curious why you're building another microVM, other than the sheer joy of building: what smol does better or differently, why use smol, etc. (Reasons to avoid microVMs etc. are also fair game :)
I focus on different design decisions.
Smolvm is designed to run locally, stay persistent (stateful), run long-lived (for efficiency), and be interactive.
I've worked with Firecracker and the other options a lot, btw; most of what's out there is designed for ephemeral serverless workloads.
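The ephemeral-vs-persistent split is easy to see in plain QEMU terms; a sketch in Python (assuming qemu-system-x86_64 and a prebuilt guest.qcow2 disk image; this is only an illustration, not smolvm's actual mechanism):

    import subprocess

    # Ephemeral, serverless-style run: -snapshot redirects all writes to
    # a throwaway temp file, so the guest boots "fresh" every time and
    # any installed packages or caches vanish on shutdown.
    subprocess.run([
        "qemu-system-x86_64", "-m", "1G", "-nographic",
        "-snapshot",
        "-drive", "file=guest.qcow2,format=qcow2",
    ], check=True)

    # Persistent, stateful run: without -snapshot, writes land in the
    # qcow2 image, so the next boot sees installed tools, warm caches,
    # and whatever state the long-running agent left behind.
    subprocess.run([
        "qemu-system-x86_64", "-m", "1G", "-nographic",
        "-drive", "file=guest.qcow2,format=qcow2",
    ], check=True)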
I needed macOS / Windows / Linux / iOS / Android for Dioxus dev, so I built my own in Rust.
https://skyvm.dev/
Given that this is using Firecracker, is it Linux only?
We use a service but it is always nice to have a free option if you need it. Good stuff.
Why is it a problem to use containers?
Anyone have any thoughts on this path if using macOS? Been using it, seems to do the trick pretty well out of the box.
https://developer.apple.com/documentation/Virtualization/run...
This relies on the agent requesting a sandbox... which seems like the fox guarding the hen house, no?
interesting