Living Dangerously with Claude

(simonwillison.net)

134 points | by FromTheArchives a day ago

62 comments

  • ZeroConcerns 3 hours ago

    So, yeah, only tangentially related, but if anyone at Anthropic would see fit to let Claude loose on their DNS, maybe they can create an MX record for 'email.claude.com'?

    That would mean that their, undoubtedly extremely interesting, emails actually get met with more than a "450 4.1.8 Unable to find valid MX record for sender domain" rejection.

    I'm sure this is just an oversight being caused by obsolete carbon lifeforms still being in charge of parts of their infrastructure, but still...

  • almosthere 5 hours ago

    Anyone from the Cursor world already YOLOs it by default.

    A massive productivity boost I get is using it to do server maintenance.

    Using gcloud compute ssh, log into all gh runners and run docker system prune, in parallel for speed, and give me a summary report of the disk usage after.

    This is an undocumented and underused feature of basic agentic abilities. It doesn't have to JUST write code.
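
    The prompt above could be sketched as a small script. This is purely a hypothetical illustration (the runner names and the DRY_RUN guard are my assumptions, not the commenter's setup); with DRY_RUN=1, the default here, it only prints the commands it would run:

```shell
# Hypothetical sketch of the maintenance prompt above.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

# Assumed runner names; in practice they would come from
# `gcloud compute instances list`.
for runner in gh-runner-1 gh-runner-2 gh-runner-3; do
  # Prune unused Docker data and report disk usage, in parallel.
  run gcloud compute ssh "$runner" \
    --command="docker system prune -af && df -h /" &
done
wait
```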

    • wrs 3 hours ago

      Yesterday I was trying to move a backend system to a new AWS account and it wasn’t working. I asked Claude Code to figure it out. About 15 minutes and 40 aws CLI commands later, it did! Turned out the API Gateway’s VPCLink needed a security group added, because the old account’s VPC had a default egress rule and the new one’s didn’t.

      I barely understand what I just said, and I’m sure it would have taken me a whole day to track this down myself.

      Obviously I did NOT turn on auto-approve for the aws command during this process! But now I’m making a restricted role for CC to use in this situation, because I feel like I’ll certainly be doing something like this again. It’s like the AWS Q button, except it actually works.
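
      For illustration, a restricted role along those lines might start from a read-mostly policy like the sketch below. This is hypothetical, not the commenter's actual role; the action list is an assumption and would need to be tailored to the services actually being debugged:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyDiagnostics",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "elasticloadbalancing:Describe*",
        "apigateway:GET"
      ],
      "Resource": "*"
    }
  ]
}
```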

    • normie3000 4 hours ago

      Is this what ansible does? Or some other classic ops tool?

    • simonw 5 hours ago

      Does Cursor have a good sandboxing story?

      • tuhgdetzhh 4 hours ago

        I run multiple instances of cursor cli yolo in a 4 x 3 tmux grid each in an isolated docker container. That is a pretty effective setup.

    • mandevil 4 hours ago

      There are a million different tools that are designed to do this, e.g. this task (log into a bunch of machines and execute a specific command without any additional tools running on each node) is literally the design use case for Ansible. It would be a simple playbook, why are you bringing AI into this at all?
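
      For comparison, a playbook for the task described above could be as short as this sketch (the `gh_runners` inventory group is an assumed name):

```yaml
# Hypothetical playbook; assumes an inventory group named "gh_runners".
- hosts: gh_runners
  tasks:
    - name: Prune unused Docker data
      ansible.builtin.command: docker system prune -af

    - name: Collect disk usage for the summary report
      ansible.builtin.command: df -h /
      register: disk

    - name: Show the per-host disk usage
      ansible.builtin.debug:
        var: disk.stdout_lines
```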

      • giobox 3 hours ago

        Agreed, this is truly bizarre to me. Is OP not going to have to do this work all over again in x days time once the nodes fill with stale docker assets again?

        AI can still be helpful here if you're new to scheduling a simple shell command, but I'd be asking the AI how to automate the task away, rather than manually asking the AI to do the thing every time, or I'd use my runners in a fashion where I don't have to concern myself with scheduled prune calls at all.

        • almosthere 2 hours ago

          No, we have a team dedicated to fixing this long term, but this allowed 20 engineers to get working right away. Long term fix is now in.

          • giobox an hour ago

            If a team of 20 engineers got blocked because you/the team didn't run docker prune, you arguably have even bigger problems...

        • bdangubic 3 hours ago

          > but I'd be asking the AI how do I automate the task away

          AI said “I got this” :)

      • ericmcer 2 hours ago

        Yeah that sounds like a CI/CD task or scheduled job. I would not want the AI to "rewrite" the scripts before running them. I can't really think of why I would want it to?

      • almosthere 2 hours ago

        Because I didn't have to do anything other than write that english statement and it worked. Saved me a long time.

        • mandevil 35 minutes ago

          I'm glad this worked for you, but if it were me at most I would have asked Claude Code to write me an Ansible playbook for doing this, then run it myself. That gives me more flexibility to run this in the future, to change the commands, to try it, see that it fails, and do it again, etc.

          And honestly, I'm a little concerned about a private key for a major cloud account being somewhere Claude can use it, just because I'm more than a little paranoid about certs.

  • matthewdgreen 21 hours ago

    So let me get this straight. You’re writing tens of thousands of lines of code that will presumably go into a public GitHub repository and/or be served from some location. Even if it only runs locally on your own machine, at some point you’ll presumably give that code network access. And that code is being developed (without much review) by an agent that, in our threat model, has been fully subverted by prompt injection?

    Sandboxing the agent hardly seems like a sufficient defense here.

    • daxfohl 2 hours ago

      That's kind of tangential though. The article is more about using sandboxes to allow `--dangerously-skip-permissions` mode. If you're not looking at the generated code, you're correct, sandboxing doesn't help, but neither does permissioning, so it's not directly relevant to the main point.

    • tptacek 5 hours ago

      Where did "without much review" come from? I don't see that in the deck.

      • enraged_camel 5 hours ago

        Yeah. Personally, I've settled on a workflow that relies heavily on detailed design specs and red/green TDD, followed by code review. And that's fine, because that's how I did my work before AI anyway, both at the individual level and at the team level. So really, this is no different than reviewing someone else's PR, aside from the (greatly increased) speed and volume.

        • tyre 4 hours ago

          I’ve found it helpful to have a model write a detailed architecture and implementation proposal, which I then review and iterate on.

          From there it splits out each phase into three parts: implementation, code review, and iteration.

          After each part, I do a code review and iteration.

          The proposal is broken down into small, logical chunks, so code review is pretty quick and the model can only stray so far off track.

          I treat it like a strong mid-level engineer who is learning to ship iteratively.

          • theshrike79 4 hours ago

            I play Claude and Codex against each other

            Codex is pretty good at finding complex bugs in the code, but Claude is better at getting stuff working

    • simonw 20 hours ago

      What is your worst case scenario from this?

      • noitpmeder 8 hours ago

        Bank accounts drained, ransomware installed, ...

      • deadbabe 4 hours ago

        Silently setup a child pornographer exchange server and run it on your machine for years without you ever noticing until you are caught and imprisoned.

  • mike_hearn 4 hours ago

    sandbox-exec isn't really deprecated. It's just a tiny wrapper around some semi-private undocumented APIs, it says that because it's not intended for public use. If it were actually deprecated Apple would have deleted it at some point, or using it would trigger a GUI warning, or it'd require a restricted entitlement.

    The reason they don't do that is because some popular and necessary apps use it. Like Chrome.

    However, I tried this approach too and it's the wrong way to go IMHO, quite beyond the use of undocumented APIs. What you actually want to do is virtualize, not sandbox.

    • krackers 2 hours ago

      Fun fact: the sandboxing rules are defined using scheme!
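
      For anyone curious, a minimal profile in that Scheme-derived policy language (SBPL) looks roughly like the sketch below. This is unofficial; the language is undocumented and the exact rules needed vary by binary:

```scheme
(version 1)
(deny default)
; allow reading system libraries so the target binary can launch
(allow file-read* (subpath "/usr/lib") (subpath "/System"))
(allow process-exec (literal "/bin/ls"))
```

      A profile like this would be applied with `sandbox-exec -f profile.sb /bin/ls`.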

  • stuaxo 11 hours ago

    I've been thinking about this a bit.

    I reckon something like Qubes could work fairly well.

    Create a new Qube with control over its network connectivity, do everything there, then copy the work out and destroy it at the end.

  • zxilly 3 hours ago

    I'd like to know how much this would cost. Even Claude's largest subscription appears insufficient for such token requirements.

  • jampa 4 hours ago

    I don't understand why people advocate so strongly for `--dangerously-skip-permissions`.

    Setting up "permissions.allow" in `.claude/settings.local.json` takes minimal time. Claude even lets you configure this while approving commands, and you can use wildcards like "Bash(timeout:*)". This is far safer than risking disasters like dropping a staging database or deleting all unstaged code, both of which Claude would have done last week if I had been running it in YOLO mode.
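
    For illustration, an allowlist along those lines in `.claude/settings.local.json` looks roughly like this (the specific entries are examples, not a recommended set):

```json
{
  "permissions": {
    "allow": [
      "Bash(timeout:*)",
      "Bash(npm run test:*)",
      "Read(./docs/**)"
    ],
    "deny": [
      "Bash(curl:*)"
    ]
  }
}
```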

    The worst part is seeing READMEs in popular GitHub repos telling people to run YOLO mode without explaining the tradeoffs. They just say, "Run with these parameters, and you're all good, bruh," without any warning about the risks.

    I wish they would change the parameter name to signal how scary it can be, just like React did with React.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED (https://github.com/reactjs/react.dev/issues/3896)

    • bdangubic 3 hours ago

      changing the parameter name to something scary will only increase its usage

    • dist-epoch 3 hours ago

      I tried this path. The issue is that agents are very creative at coming up with new variations: "uv run pytest", "python3 -m pytest", "bash -c pytest".

      It's a never ending game of whitelisting.

  • lacker a day ago

    The sandbox idea seems nice, it's just a question of how annoying it is in practice. For example the "Claude Code on the web" sandbox appears to prevent you from loading `https://api.github.com/repos/.../releases/latest`. Presumably that's to prevent you from doing dangerous GitHub API operations with escalated privileges, which is good, but it's currently breaking some of my setup scripts....

    • simonw a day ago

      Is that with their default environment?

      I have been running a bunch of stuff in there with a custom environment that allows "*"

      • lacker 2 hours ago

        I whitelisted github.com, api.github.com, *.github.com, and it still doesn't seem to work. I suspect they did something specifically for github to prevent the agent from doing dangerous things with your credentials? But I could be wrong.

  • igor47 a day ago

    My approach is to ask Claude to plan anything beyond a trivial change and I review the plan, then let it run unsupervised to execute the plan. But I guess this does still leave me vulnerable to prompt injection if part of the plan is accessing external content

    • abathologist 3 hours ago

      What guarantees do you have it will actually follow the stated plan instead of doing something else entirely?

    • ares623 5 hours ago

      Just don’t think about it too much. You’ll be fine.

  • boredtofears 5 hours ago

    I like the best of both worlds approach of asking Claude to refine a spec with me (specifically instructing it to ask me questions) and then summarize an implementation or design plan (this might be a two step process if the feature is big enough)

    When I’m satisfied with the spec, I turn on “allow all edits” mode and just come back later to review the diff at the end.

    I find this works a lot better than hoping I can one shot my original prompt or having to babysit the implementation the whole way.

    • wahnfrieden 5 hours ago

      I recommend trying a more capable model that will read much more context too when creating specs. You can load a lot of full files into GPT 5 Pro and have it produce a great spec and give more surgical direction to CC or Codex (which don’t read full files and often skip over important info in their haste). If you have it provide the relevant context for the agent, the agent doesn’t waste tokens gathering it itself and will proceed to its work.

      • boredtofears 4 hours ago

        Is there an easy way to get a whole codebase into GPT 5 Pro? It's nice with claude to be able to say "examine the current project in the working directory" although maybe that's actually doing less than I think it is.

        • simonw 4 hours ago

          I wrote a tool for that: https://github.com/simonw/files-to-prompt - and there are other similar tools like repomix.

          These days I often use https://gitingest.com - it can grab any full repo on GitHub and gives you something you can copy and paste, e.g. https://gitingest.com/simonw/llm

          • dist-epoch 3 hours ago

            I wrote a similar tool myself, mostly because neither your tool nor repomix supports "presets" (saved settings):

                [client]
                root = "~/repo/client"
                include = [
                    "src/**/*.ts",
                    "src/**/*.vue",
                    "package.json",
                    "tsconfig*.json",
                    "*.ts",
                ]
                exclude = [
                    "src/types/*",
                    "src/scss/*",
                ]
                output = "bundle-client.txt"
            
                $ bundle -p client
            
            What do you do when you repeatedly need to bundle the same thing? Bash history?
          • boredtofears 4 hours ago

            Of course you did - thanks, huge fan!

  • danielbln a day ago
    • js2 a day ago

      It's discussed in the linked post.

  • catigula 19 hours ago

    Telling Claude to solve a problem and walking away isn't a problem you solved. You weren't in the loop. You didn't complete any side quests or do anything of note, you merely watched an AGI work.

    • _factor 5 hours ago

      Writing your Java code in an IDE, you just sat by while the interpreter did all the work on the generated byte code and corresponding assembly.

      You merely watched the tools do the work.

      • bitpush 5 hours ago

        This exactly is the part that lots of folks are missing. As programmers in a high-level language (C, Rust, Python, ...), we were merely guiding the compiler to create code. You could say the compiler/interpreter is more deterministic, but the fact remains that the code that runs is 100% not what you wrote, and you're at the mercy of the tool .. which we trust.

        Compiled output can change between versions, heck, can even change during runtime (JIT compilation).

        • catigula 5 hours ago

          The hubris here, which is very short-sighted, is the idea that a. You have very important contributions to make and b. You cannot possibly be replaced.

          If you're barely doing anything neither of these things can possibly be true even with current technology.

      • catigula 5 hours ago

        This is a failure of analogy. Artificial intelligence isn't a normal technology.

    • bdangubic 2 hours ago

      exactly. the problem did get solved though which is the whole point :)

    • wahnfrieden 5 hours ago

      Who cares? I don't see any issue. I write code to put software into users' hands, not because I like to write code.

      • catigula 5 hours ago

        You don't see any issue with the I in this equation falling out of relevance?

        Not even a scrap of self-preservation?

        • ares623 5 hours ago

          I live for shareholder value.

          • wahnfrieden 4 hours ago

            It feels great when I'm the only shareholder

        • wahnfrieden 5 hours ago

          Since I ended my career as a wage worker and just sell my own software now, automation is great for me. Even before GPT hype I saw the writing on the wall for relying on a salary and got out so that I could own the value of my labor.

          I don’t see my customers being able to one-shot their way to the full package of what I provide them anytime soon either. As they gain that capability, I also gain the capability to accelerate what more value I provide them.

          I don’t think automation is the cause of your inability to feed and house yourself if it reduces the labor needed by capital. That’s a social and political issue.

          Edit: I have competitors already cloning my products with CC regularly, and they spend more than 24h dedicated to it too

          If the capability does arrive, that’s why I’m using what I can today to get a bag before it’s too late.

          I can’t stop development of automation. But I can help workers organize, that’s more practical.

          • catigula 4 hours ago

            >I don’t see my customers being able to one-shot their way to the full package of what I provide them anytime soon either

            What if they are, or worse? Are you prepared for that?

            If you point me towards your products, someone can try to replicate them in 24 hours. Sound good?

            Edit: I found it, but your website is broken on mobile. Needs work before it's ready to be put into the replication machine. If you'd like I can do this for you for a small fee at my consulting rate (wink emoji).

            • dist-epoch 3 hours ago

              > someone can try to replicate them in 24 hours.

              All the more reason to not hand-code it in a week.

        • dist-epoch 3 hours ago

          Do you think a programmer not using AI will stop its march forward?

          • catigula an hour ago

            If more people see the cows-for-beef analogy, we gain more votes against it.