Hi HN,
While building the execution runtime for our AI tool ecosystem (Gace), we originally planned to rely on local execution—similar to how OpenClaw handles things.
But the further we got, the more we realized that treating a user's laptop as a 24/7 background server for LLMs is an architectural anti-pattern. Two things killed it for us:
Latency: ReAct loops bouncing back and forth over home Wi-Fi ruin the UX.
Security: Running untrusted community scripts locally without absolute sandboxing is terrifying.
So we pivoted. We built a cloud sandbox using quickjs-emscripten that executes JS tools in strict isolates with 25ms cold starts. By putting the executor in the same data center as the LLM, the multi-step latency tax practically disappears. (For eventual local file access, we're building a dumb, permission-gated daemon rather than a heavy local execution engine).
I wrote down our technical reasoning on why we think the current "local-first" agent trend is structurally flawed. I'd love to hear your thoughts.
We'll see about OpenClaw's future; now that OpenAI has acquired them, they may go full cloud.
We're already seeing some "spin up OpenClaw VM with one click" solutions (I mean user-friendly VPS wrappers).
Although I don't think OpenClaw will become a cloud-native solution, especially taking into account the OpenClaw author's vision and how large the codebase has become.
Not to mention that such a pivot would be impossible without deprecating skills (which are built specifically for the current architecture).
claw spam has wrecked the claw brand imo
Not sure I’d call it spam. A cloud pivot would be a major shift in both architecture and audience, but it doesn't necessarily mean going closed source.
There is definitely claw spam across GitHub, HN, arXiv, and social media.
This is what I'm referring to