I really want to love this, but my experience in the first 20 seconds is unfortunately like some of my other experiences coding against Fly APIs: they're broken.
https://sprites.dev/api has this command:
    $ curl -X POST "https://api.sprites.dev/v1/sprites" \
        -H "Authorization: Bearer $SPRITES_TOKEN" \
        -d '{"name": "my-sprite"}'
which responds with
{"error":"name is required"}
If you use the request body in the full "Create Sprite" documentation at https://sprites.dev/api/sprites#create, then it does work.
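My guess (unconfirmed) is that the quickstart example is just missing a JSON Content-Type header, since curl sends -d bodies as form-encoded by default. Something like this would probably work:

    $ curl -X POST "https://api.sprites.dev/v1/sprites" \
        -H "Authorization: Bearer $SPRITES_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"name": "my-sprite"}'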
Can I live with some rough edges for personal workflows that only affect me when things break? Sure. However, I was thinking about playing with some CI/CD workflows built on sprites that would affect our whole team if things broke, and this first-20-seconds experience leaves me on the fence.
Fly team: please put some black-box probes or just better testing on the example you give in the quick start. If you document it, test it.
Can this issue be reported?
I wish more companies had open issue trackers. Some proprietary software has issues on GitHub, for example, but it doesn't need to be GitHub; just let people discuss issues in the open.
I'm really excited about https://sprites.dev/ - it hits two of my favourite problems at once:
1. Developer environment sandboxes. This is a cheap and convenient way to run Claude Code / Codex CLI / etc in YOLO mode in a persistent sandboxed VM with a restricted blast radius if something goes wrong.
2. Sandbox API. Fly now have a product that lets me make a simple JSON API call to run untrusted code in a new sandbox. There's even snapshotting support so I can roll back to a known state after running that code.
I wrote a bunch more about this here: https://simonwillison.net/2026/Jan/9/sprites-dev/
I know you know this, as you posted it, but readers might want to look at this related thread:
Fly's Sprites.dev addresses dev environment sandboxes and API sandboxes together - https://news.ycombinator.com/item?id=46561089 - Jan 2026 (10 comments)
I have found container-use to be super useful for this.
https://container-use.com/quickstart
BTW Simon, I was super happy when I heard on Theo's podcast that he will be encouraging you to monetise your work more. I'm super appreciative of your work and I'm pretty convinced that the more you profit from it, the better the universe will be!!!
I've been having so much fun working on sprites (and working with sprites) these last several months. There are some neat parts of the Elixir side of this we're going to open source soon.
Also check out the 5 min demo we put out where I walk thru some sprite basics: https://www.youtube.com/watch?v=7BfTLlwO4hw
One of the coolest things about this is that Claude, in his environment --- without being asked to --- knows how to drive Sprites. If you ask it to run a server, it will register it as a local service so it survives reboots. Without you asking, it'll checkpoint when it makes big changes. I think this is kind of freaky.
I can't say enough how, if you're using this like Kurt and Chris have been, you have like, a dozen sleeping Sprites in your Sprite list. If you're not doing anything with them, they're not really costing you anything. When you want to do something new, there's no point figuring out which of your existing Sprites to do it on. Just make a new one.
Always having a sane place to run anything I happen to be doing, without making any decisions, it's a weird feeling.
Do we pay a storage penalty for inactive sprites?
You pay for the storage you actually use (not the raw capacity). If you build, like, a relatively complicated Python web service with some assets, and all the build deps that go with that, you might be on the hook for, like, 90 cents in a month.
Right that makes sense thank you
This is seriously cool - it's exactly the DX and API I've been waiting for from sandboxed execution providers.
I'd love to be able to configure the base image/VM in a way that doesn't bundle coding tools or anything else I don't need, and comes with some other binaries installed (I'm more interested in using this as an API for a sandbox use-case I have). Is there a way to do this at the moment / is this on the roadmap?
Another option would be configuring the sprite via checkpoint and then cloning the checkpoint from a base sprite, but I don't see this option anywhere either.
Yes! It would be kinda cool to have the ability to docker-deploy a base sprite image (think the fly method even -- just to get your sprite on its feet the way YOU want it) and then just go from there in the normal sprite way.
> When you start a feature branch on your own, do you create an entirely new development environment to do it?
… yes? We have a few wrapper scripts around worktree operations that copy some docker volumes (pg data, bundle cache, etc.) from the base and spin up an entirely new stack on different ports with a host alias. We don’t have to install any deps beyond that, because we copied over the Ruby gems bundle cache and we’re using Yarn PnP + “zero installs” for client-side deps.
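Roughly this shape (the repo paths, volume names, and compose project here are made up for illustration, not our actual scripts):

    # create a worktree for the new branch
    git worktree add ../myapp-feature-x feature-x

    # copy the base docker volumes (pg data, bundle cache) for the new stack
    for vol in pgdata bundle; do
      docker volume create "myapp_feature_x_${vol}"
      docker run --rm \
        -v "myapp_base_${vol}:/from" -v "myapp_feature_x_${vol}:/to" \
        alpine sh -c 'cp -a /from/. /to/'
    done

    # bring up an isolated stack under its own compose project and ports
    (cd ../myapp-feature-x && docker compose -p myapp-feature-x up -d)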
Wait - you have a repository with a dev environment, and now that you want a new feature branch, you’re creating an entirely new dev environment?
Maybe I’ve been isolated from The World for too long, but this sounds … unhealthy.
> There are some important million-person apps, but most of them just destroy civil society, melt our brains, and arrange chauffeurs for individual cheeseburgers.
All the cool technical stuff aside - this, for me, was the standout line of the article
I might have missed this in the docs, but is there a way to fork/clone a sprite, or restore a checkpoint into a new one?
Use cases: set up my preferred env in one sprite and use that as a template for others; or fire off a few independent sprites with claude code exploring alternative solutions, then choose a winner and reap the rest.
It's coming, and it'll make sense how and why next week when I run the "how this shit works" post.
I actually pushed to include it in the launch release. You'd have to ask Kurt why he didn't, but I think the idea is just to get more real-world usage first.
Wow, this looks absolutely fantastic. Can't wait to take it for a spin. I'm actually surprised it isn't seeing more traction here!
In particular, I'm really excited about the extremely fast start up time and checkpointing. I'm curious if anyone knows any alternatives in this space?
> Claude is a hyper-productive five-year-old savant. It’s uncannily smart, wants to stick its finger in every available electrical socket, and works best when you find a way to let it zap itself.
This alone was worth the upvote!
AFAIK fly.io run firecracker and cloud-hypervisor VMs. This seems to have a copy-on-write filesystem underneath.
Given their principled take on only trusting full-VM boundaries, I doubt they moved any of the storage stack into the untrusted VM.
So maybe a virtio-block device passing through discard to some underlying CoW storage stack, or maybe virtio-fs if it's running on ch instead of fc? Would be interesting to hear more about the underlying design choices and trade-offs.
Edit: from their website, "Since it's just ext4, you won't run into weird edge cases like you might with NFS or FUSE mounts. You can happily use shared memory files, for example, so you can run SQLite in all its modes." So it's a virtio block device supporting discard that's exposed to the VM. Interesting; fc doesn't support virtio discard passthrough, and support for ch is still in progress...
I have a post coming next week about the guts of this thing, but I'm curious why you think we'd avoid running the storage stack inside the VM. From my perspective that's safer than running it outside the VM.
My impression is that you (very reasonably) treat anything inside the VM as untrusted. If you want trusted rollback, presumably that implies that the VM can't have any ability to tamper with the snapshot?
But maybe you have parts of the stack that don't need to be trusted inside the VM somehow? Looking forward to the article.
Safer from what? It depends whether you're protecting the infra or the data.
They're closely linked; protecting the infra is protecting the data.
On one hand it sounds cool. On the other, I feel like I'm missing something.
Is this just a fancy VPS, like DigitalOcean, with an https endpoint, snapshot, and restore?
(Same thing goes for exe.dev)
Yes, plus:
* Near-instant creation
* Automatic spin-down scale-to-zero, so you're not paying for it when it's not in use.
If you're using these like we are internally, you've got like 2 dozen of them sitting around in the background sleeping. They're BIC disposable computers. "When in doubt just make another one."
I see.
Also "containers" always had the option to attach durable storage via bind mounts.
I still get confused by the "this isn't containers" framing, but it's kind of similar.
Maybe I am just too caught up in semantics.
A VPS that is instant to boot, with super simple automatic routing and an https proxy, plus snapshots and durable storage, is a win regardless.
"Containers" are that, and fast, in part because they share kernels, so there's no serious rebooting happening. But the consequence of that design is you share a kernel with untrusted cotenants.
And then there's just the idea of being able to pull these out of the sky literally whenever you want one. If you want to try something new out real quick, it makes no sense to figure out which of your existing Sprites to use. Just make a new one. If you're a little OCD, like I am, every once in a while you can go prune, if you really care.
The post says "hardware isolated" but below in the sandbox it says firecracker, which I thought were supposed to be a secure way to run containers from multiple tenants on a single host. Also I thought Fly machines were already using firecracker.
I'm having trouble understanding the difference to Fly machines. If you spin up a Debian container on a machine with a persistent volume, doesn't that have everything this does? Is this about providing a layer of useful configuration/management software on top?
Subtle to explain. I'll explain better later this week. For now, though, just know: every Sprite is, under the hood, a KVM VM.
something that isn’t clear to me: what’s the billing when i’m not actively using a sprite? does that go to zero as well, or am i still being billed for storage?
If it's similar to Cloudflare, then it should be usage based. That is, you only pay for what is active (i.e. if you are running a task that is waiting on the network for an hour, you don't pay for CPU, but your app is loaded so you are paying for memory). So if your app is dormant (not using CPU or memory), you only pay for the storage you are using.
Yeah, reading further into the docs it looks like that’s the model. Storage is pretty cheap, $0.00068/GB-hr, which works out to about 1.6 cents per GB per day.
Note you're paying for what you use, not the capacity currently allocated to your Sprite.
That's roughly what Cloudflare containers are right? (with migrations being the checkpoints?). Cloudflare containers are also nearly instant and have scale-to-zero pricing. The only difference here is the CLI?
Your pricing looks competitive on compute but roughly 4-5 times more expensive on memory and double on storage.
Basically endgame VPS. Instant creation, snapshotting, restore. Actually quite impressive even if you don't buy the whole Claude spiel.
I wonder the same thing. What’s so different than your own vps and using lxd to create a container. Make two bash aliases and wow you can go in and out quickly and recreate it with one command.
If you have an LXD setup working for your own workloads that's working well for you, that's awesome. Why would we want to talk you out of that? Fundamentally you're getting at the difference between "elastic" cloud services and personal infrastructure. Personal infra is great!
If it helps: Jerome has been working for a couple months on a local, open-source Rust version of Sprites, so you can use the same DX with your own infrastructure. We just think this is the right "shape" for modern sandboxes, wherever you actually run them.
fly.io is doing really good work. I've super enjoyed building our product on their platform. I love fly-replay combined with super fast start-up.
I've been thinking a lot about how to run agents (and skills) securely while giving them a lot of powerful capabilities.
I recently used their macaroons library to turn arbitrary API keys (e.g. for Stripe's API) into macaroons. I route requests for an upstream host (like Stripe) through Envoy as a MITM proxy, which injects the real creds after verifying the macaroon.
It is such a powerful pattern. I'm always worried about leaking sensitive keys through prompt injection attacks (or just sending them to anthropic), but in this model you can attenuate the keys (both capabilities & validity window) client side. The Envoy proxy lives inside my flycast network so it can't be accessed externally.
It would be so cool if fly built something like this into sprites.dev (though I can see how it would be spooky to have fly install their own certs for stripe, etc...)
If you read Ben Toews work on the tokenizer you have a good sense of where I want Sprites to go with key leaks and prompt injection:
https://fly.io/blog/tokenized-tokens/
Awesome stuff! Thanks for the reply.
Tokenizer is an explicit proxy though right?
My use case is very similar, but I wanted a transparent proxy so I could run unmodified scripts. It is a tricky design decision though.
I also mount a little FUSE filesystem that mints a macaroon on read (with a shorter lifetime; probably inspired by y'all, but I forget from where).
I work on realtime collaboration on markdown files (currently in Obsidian), which has become a shared-context substrate for agents, skills, etc. Our own company workspace has skills with scoped access to Fly, Stripe, Gmail, etc. We're definitely drinking the file-over-app, personal-software-for-teams Kool-Aid, so the problem space for us includes access control and auditing.
Love your work :)
We have enough control over the execution environment in a Sprite (unlike a Fly Machine, where the implied Linux contract we have with our users gets in the way) that we can trivially hide explicit proxies.
We can also attach Macaroons to Fly Machines and Sprites for configurable ambient privileges, something I've wanted us to expose as a feature for a very long time.
Awesome, I look forward to that. I think that could be a major differentiator for sprites. I wish I could work on that problem at fly.io scale.
What is the contract with sprites? Is it just built-with-linux but not promising Linux? Or is it more like a machine but y'all control the container image?
There's no "formal" contract in either place but people running on Fly Machines expect that there's nothing at all between them and the kernel, and we don't have that expectation in Sprites; we can do whatever we want. :)
I don't want to get too far into the rest of the details only because I'm writing this up for next week. They're not that interesting technically, but they're a really big deal for us in other ways.
Great, I look forward to reading it.
Did you write up anything about this? Is this off the shelf behavior for Envoy or did you create this API yourself?
I can open source it next week when I get a chance.
This seems cool but maybe not for a production setting requiring concurrency? I just signed up on PAYG which offers 3 concurrent sprites. I only see an option to upgrade to 10 concurrent sprites.
Without getting into Kurt's galaxy-brained take on the declining importance of "production" in a post-AI world, I'd say: yeah, run prod apps on Fly Machines, for more predictable performance, scaling, and pricing. Do exploratory computing --- "figuring out what you'd run on a Fly Machine" --- in Sprites.
The sprite installer got stuck after "Installed to ..." for me. After waiting a few minutes I just ctrl-C'd it, looked at what it does next, and manually ran "sprite auth setup --token <token>", which also seems to just hang for me.
I thought fly.io snapshots weren't guaranteed to stick around? I can't find the docs mentioning it now, but I checked within the last few months... maybe they changed it?
More complicated than that, but with respect to Sprites --- this is a totally new stack.
I want something like this, but running on my own box. I now have a Linux box with plenty of RAM and storage under my desk. (It happens to be an NVIDIA DGX Spark, but I'm not really interested in passing the GPU through to these sandboxed VMs; I know that's not practical anyway.) Maybe I'll see if I can hack together a local solution like this using Firecracker.
That's coming. It's what Jerome has been working on these past few months.
Maybe bend smolvm to your needs?
Playing around with this for a short amount of time: it is very neat, but there are also a bunch of things that are unclear / undocumented (I assume the documentation is coming, so I'm not faulting them for it not being there yet).
Some things that are unclear:
- How should I auth to GitHub? sprite console doesn't use ssh (AFAIK), so I guess agent forwarding is out?
- What on-machine APIs are available? Can I use the Fly OIDC provider [1]? There's a /.sprite/api.sock, but curl'ing /v1/tokens/oidc gets a 404.
- How much is it going to cost me? I know there is pricing, but it's hard to figure out what actual usage would be like. Also, I don't see any usage info in the web UI right now.
[1]: https://fly.io/blog/oidc-cloud-roles/
Don't think of this as in any way connected to the Fly Machines API. For now, just take it on its own terms. We'll have an open-source local version of it relatively soon, if that clarifies anything.
To follow up on this a bit: something that I really want is a way to build and launch apps from an LLM really easily. I am imagining an environment with a database, object storage, and a publicly reachable webserver. I think this could be that, with OIDC auth to an S3 bucket and Litestream.
I was previously thinking about doing the same thing on my homeserver, with Tailscale to expose the web interface publicly and Tailscale OIDC auth to an S3 bucket for object storage.
I have a Sprite with an auth token to an isolated Sprite org; it works really well for this.
SQLite works great for my apps. I haven't needed object storage yet, storing files on disk is enough.
I believe the .sprite dir has some stuff to help Claude answer those questions. I haven't done it myself, but my friend said he was able to get Claude to set it all up for him (YOLO mode helps), including connecting to GitHub.
Now, please make it easy to control network egress!
It'd be cool to create an MCP for this so you can have your agents run persistent code / other agents.
This is a big pain point today if you aren't technical; most of the chat interfaces just let you create frontend-only apps.
You can do this now without an MCP, by auth'ing the `sprite` command inside of a Sprite and telling Claude to go document it for you. You can do things like "make me three versions of this feature on three different Sprites so I can compare them". It is spooky how easy it is to teach agents this stuff.
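Roughly like this (treat the exact flags as approximate; the `sprite --help` step and the prompt wording are mine, not a documented recipe):

    # inside a Sprite: auth the CLI, then have Claude teach itself
    sprite auth setup --token "$SPRITES_TOKEN"
    claude -p "Run 'sprite --help', try the subcommands, and write SPRITE.md documenting how to create, checkpoint, and restore Sprites."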
> Stop killing your sandboxes every time you use them.
Do people do this? I’ve never heard of it.
Like it, a lot. I think the future of software is going to be unimaginably dynamic. Maybe apps will not have statically defined feature sets, they will adjust themselves around what the user wants and the data it has access to. I’m not entirely sure what that looks like yet, but things like this are a step in that direction.
> I think the future of software is going to be unimaginably dynamic.
>...I’m not entirely sure what that looks like yet, but things like this are a step in that direction.
This made me stop and think for a moment about what this would look like as well. I'm having trouble finding it, but I think there was a post by Joe Armstrong (of Erlang) that talked about globally (as in across system boundaries, not global as in global variable) addressable functions?
I spun one up, started a server on port 8080, ran `sprite url`, it gave me a URL, that URL just has `{ "error": "unauthorized" }`. How am I supposed to access it?
sprite url update --auth public
It requires your api token by default.
Do we handle our own certs or do you have a proxy in front of the sprites that can do auto ssl stuff?
We handle all the SSL stuff. Sprites run on the same Anycast network with the same control plane as Fly Machines, which are built for srs bzns.
Oh, thanks, that works. ([edit] rewrote this whole post) I guess I need to install my own tunneling into the VM to do web development on it, but that's not so bad. The lack of regional support is crippling, because whatever region you put me in is ~200ms from me and the typing lag is terrible.
I'd love to adopt this for all my development (which I currently do using rented cloud instances, so I'm pretty comfortable with the remote development paradigm). I'm especially excited about the snapshot/clone pattern, and have (this past week) been researching solutions for exactly this problem.
Hope you launch multiple regions for this ASAP. Will be watching.
If you `sprite console` to it, it'll forward any ports you open to localhost. You can tunnel almost everything through the CLI with the `sprite proxy` command.
sprites.dev looks very interesting to me. Is there a way to set up a limit to how much scaling a sprite can get, or to set a spending limit? I wouldn't want to spin something up, and then be surprised by an unexpectedly high bill.
What are the criteria for a sprite being "idle"? Is it no network activity, or is it CPU based?
It stays awake if you have an open connection (like sprite console) or a running exec session that's producing stdout.
You can specify a max exec time for a process when you launch it via the API.
Looks like it's no network activity for 30 seconds.
This is amazing. Great job Fly team!
Hmm, so even just doing a simple ls -la on the home dir is occasionally taking ~10s. Other times, it's instant (I'm on a stable 1 Gbps connection).
Have been experiencing intermittent connection drops as well.
This sounds great and it's roughly what exe.dev is doing too. Coincidence?
This has been in the works for quite a while here. We put a long bet on "slow create, fast start/stop" --- which is a really interesting and useful shape for execution environments --- but it didn't make sense to sandboxers, so "fast create" has been the White Whale at Fly.io for over a year.
Not really. One of the primary features of sprites.dev that I don't see anywhere on exe.dev is a fast way to create and restore checkpoints, like a git repo for your entire VM.
This is needed for sandboxes if you don't want to throw them away and start over when something goes wrong.
With sprites.dev you can create an additional checkpoint and then turn Claude Code (or your preferred agent) loose to do anything. Even if it burns down the sandbox you can just restore a checkpoint in about a second.
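The loop is roughly this; the command names below are illustrative rather than exact CLI syntax, since I'm going from the docs rather than quoting them:

    # hypothetical command names, not exact CLI syntax
    sprite checkpoint create                    # snapshot the known-good state
    claude --dangerously-skip-permissions -p "refactor the whole service"
    sprite checkpoint restore <checkpoint-id>   # back to known-good in about a second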
[exe.dev co-founder here] If you are curious, we have a `clone` command coming soon for sub-second creation of a new VM out of an existing VM. This is our first pass at checkpointing: rather than introducing an independent `snapshot` noun, you can keep a VM around as the snapshot.
We realize that is not going to cover all the business cases we have been discussing with customers and plan to introduce a snapshot concept (in particular for rewinding the state of a VM to an automatic backup), but we have a lot of FS work underway before we can launch it. There are some other things we want out of our VMs that we cannot do using conventional cloud techniques, so we have code to write.
Exe.dev is very cool.
Yes that’s certainly a great feature and they don’t have it currently. For what it’s worth, they do have a teaser about “Persistent disks with some really interesting work coming soon.”
https://blog.exe.dev/meet-exe.dev
I have just now learned about exe.dev and it looks awesome.
I really hate that modern development means not having a persistent disk. I’m glad there are new options coming out which let you do this in an easier way than managing my own EC2 instances!
Something simpler I've done, in the same spirit: LXC containers (using Incus) in a VM. LXC containers look and feel like VMs, but are very lightweight. And the VM they all run in provides the hard sandbox.
And when I spin up a new LXC container, cloud-init sets it up with the agents and my repos inside.
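Roughly like this (the image, container name, and cloud-init file are placeholders; the Incus flags and the cloud-init.user-data config key are the standard ones):

    # launch a fresh container, seeded by cloud-init with the agents + repos
    incus launch images:ubuntu/24.04 agent-box \
      --config cloud-init.user-data="$(cat agent-cloud-init.yaml)"
    incus exec agent-box -- su - dev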
Could you clarify what this actually is?
Would I think of this as an EC2 instance which automatically and quickly scales to zero, with pricing only for resources consumed? (CPU and RAM when up, and disk all the time?)
Yeah that's about right.
It's a fast-starting and fast-pausing persistent VM, with a ton of built-in developer tools (including a preconfigured Claude Code) and an extra JSON API for executing commands within it, so you can treat it as a sandbox.
You may find my writeup here useful: https://simonwillison.net/2026/Jan/9/sprites-dev/
How exactly can code agents make use of this? You install claude code inside a Sprite and run it there? Do you also need to put all your codebase in this sprite?
Claude Code is already in the Sprite; just create one and type "claude". But they have an API and Claude (or Gemini or Codex) can use them remotely too. They're disposable computers. Use them however you want.
Will you guys get mad if I try to do something like transcription with a tiny model on a sprite?
You can use git to pull down code from a remote repo
So this is neat and useful and I think will/should get traction.
So let's say sprite is my building/dev ground floor. I get my thing/app to where I want it, but at the end of the day I think my thing/app is so awesome that it should be a production app for the whole world, and, I want to actually deploy it on fly, say.
Have you guys thought about that workflow, and what it might take to push button/migrate a sprite app over to fly?
Also, any plans for GPU sprites?
It depends on which Fly person you talk to. If you talk to Kurt he'll try to sell you on his crazy dream of how all software is going to be malleable and "prod" doesn't mean anything anymore. If you ask me: tell Claude to make a Dockerfile of the current state of your Sprite, and then deploy it as a Fly Machine. It's a good question, and we're working out how the transition from Sprite to Fly Machine works, but that's how I'd do it today.
I don't think we're going to do anything new with GPUs any time soon.
Unsure if it's an intended typo: `rm -rf $HMOE/bin`
I ran the command to check and it erased /bin and now my sprite is busted. But I was able to restore from a checkpoint and it's all good.
Intended typo so you can see restore happen ;)
I'm not really sure I get the value of these being remotely hosted. We're writing code on super powerful machines with hypervisors built in.
My libvirt setup does this right now. I have a little dumb CLI I wrote that lets me create, start, stop, save, restore, and destroy preconfigured machines. I use it for testing provisioning scripts and playbooks. You get the full cloud experience by including a cloud-init ISO, so you can ssh to it with my key the moment it boots. Didn't realize I was at the frontier of computing paradigms.
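The save/restore part, at least, is more or less just stock virsh snapshots, something like:

    # snapshot a known-good machine, break it, roll back
    virsh snapshot-create-as devbox clean --description "post-provision"
    # ...run the playbook, trash the VM...
    virsh snapshot-revert devbox clean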
Don't get me wrong, the interface Fly has is super nice, but it feels like the endgame isn't remote-hosted computers; it's a nice user-friendly interface (i.e. what Docker did), just for persistent local VMs.
Sure, but plenty of users don't want to have to set up or configure all that locally; shared hosting vs. running your own VPS is a rough analogy.
> I have kids. They have devices. I wanted some control over them. So I did what many of you would do in my situation: I vibe-coded an MDM.
Wait, what?