Mruby has something like that built in: you can create a VM which only has basic data types and control flow, with no I/O, RNG, time, metaprogramming, or any host access possible, simply because most functionality is only available as gems and they simply aren't loaded. Everything you can do with it should be fully deterministic.
It looks really promising, but I would love more examples of how to actually use this with AI agents. Reading the homepage, it is not clear whether we are meant to have the agent spun up and acting fully in the sandbox (something like the HTTP example), or whether we take the code an AI agent returns and then run it dynamically (with eval?).
That being said, this is useful even apart from running AI agent code; being able to limit RAM and CPU usage and set timeouts makes it easier to run coding-based games/applications safely (like Battlesnake and LeetCode).
Thanks! Got it, I will add more examples for that. Currently you can do both: run untrusted code dynamically with eval, or run fully encapsulated logic (like in the existing examples).
I made a small example that might give you a better idea (it's not eval, but shows how to isolate a specific data processing task): https://github.com/mavdol/capsule/tree/main/examples/javascr...
And yes, you are spot on regarding LeetCode platforms. The resource limits are also designed for that kind of usage.
Would like to see the eval version - the dialogue version just seems like normal code with extra steps?
Yeah, the previous example was quite basic. I will write a complete example for that, but here is roughly how you can run dynamic code:
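(Sketch only - the decorator name and limit parameters below are illustrative, not necessarily the exact API; check the repo for the real signatures.)

    # Sketch: `sandbox`, `memory_limit` and `timeout` are illustrative names,
    # not necessarily the exact Capsule API.
    from capsule import sandbox  # hypothetical import

    @sandbox(memory_limit="128MB", timeout=5)
    def run_untrusted(code: str):
        # The whole function body, including the eval of the model-generated
        # snippet, executes inside the WASM sandbox under the limits above.
        return eval(code)

    # e.g. a snippet produced by an agent
    print(run_untrusted("sum(i * i for i in range(10))"))  # 285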
Hope that helps!
This looks very neat indeed! Are there any plans to add network limits? Like, you might want to avoid an agent running code that just requests a resource in a loop, or downloads massive amounts of data.
Thanks! Not yet, but that's a great idea. I could definitely add it to the roadmap.
Why go this route? The reason Python is more powerful than JS is mostly third-party libraries like pandas, which are explicitly not supported (C bindings - is this possible to fix?)...
At that point it might just be easier to convince the model to write JS directly.
You can run libraries like Pandas in WebAssembly in Pyodide - in fact Pandas works already. Here's a demo I built with it a while ago: https://tools.simonwillison.net/pyodide-bar-chart
It's not too hard to compile a C extension for Python to WebAssembly and bundle the resulting .so file in a wheel. I did an experiment with that the other day: https://github.com/simonw/tiny-haversine?tab=readme-ov-file#...
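For reference, loading one of those pre-built packages from inside a Pyodide runtime looks roughly like this (micropip resolves pandas from Pyodide's bundled WebAssembly wheels; a custom wheel can also be installed by URL):

    # Runs inside Pyodide (e.g. in the browser); top-level await is supported there.
    import micropip
    await micropip.install("pandas")   # fetches the pre-built WebAssembly wheel

    import pandas as pd
    df = pd.DataFrame({"lang": ["python", "js"], "sandboxed": [True, True]})
    print(df.head())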
I would love for the component model tooling to reach that level of maturity.
Since the runtime uses standard WASI and not Emscripten, we don't have that seamless dynamic linking yet. It will be interesting to see how the WASI path eventually converges with what Pyodide can do today regarding C-extensions.
I understand your point. I added native Python support because C extensions will eventually become compatible. Also, we might see more libraries built with Rust extensions appearing, which will be much easier to port to Wasm.
It seems important to highlight these more. Aren't all the limitations of using this based around their limitations?
componentize-py – Python to WebAssembly Component compilation
+
jco – JavaScript toolchain for WebAssembly Components
I'm curious how WASI 0.3 cross-language components will go for something like this.
I agree; this project looks impressive, but I'm guessing there are some rough edges in the transpilation "magic" that should be called out.
That's the crux of how usable this is going to be for people's use cases, and it's better to document the limitations upfront.
I recreated many Node.js built-ins, so compatibility is actually quite extensive.
For Python, the main limitation is indeed C extensions. I'm looking into solutions; the move to WASI 0.3 will certainly help with that.
The decorator syntax is neat but confusing to me - I would need to understand exactly what it's doing in order to trust it.
I'd find this a lot easier to trust if it had the Python code that runs in WASM as an entirely separate Python file; then it would be very clear to me which bits of code run in WASM.
Personally: love the decorator pattern after I got used to it :)
Posted this yesterday as well, but seems like a really nice emerging pythonic way to call out to remote infrastructure (see: Modal[1]).
[1]: https://modal.com/docs/examples/hackernews_alerts#defining-t...
Thanks for the feedback! What do you think about running the separate file directly from the decorator?
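Something along these lines (the decorator name, the file= parameter and the limits are just a sketch of the idea, not a finalized API):

    # Sketch of the proposal: the sandboxed code lives in its own file and the
    # decorator loads it into the WASM runtime. `task`, `file=` and the limits
    # are illustrative names only.
    from capsule import task  # hypothetical import

    @task(file="sandboxed/process_data.py", memory_limit="256MB", timeout=10)
    def process_data(records):
        ...  # local body unused; sandboxed/process_data.py runs in the sandbox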
I'd love that. I want to be able to look at the system and 100% understand which code is running directly and which code is running inside the sandbox.