Personal AI Is Here (and You're Probably Not Ready)

(robert-glaser.de)

5 points | by todsacerdoti 2 months ago

2 comments

  • g-b-r 2 months ago

    It's astonishing and horrifying that he didn't even mention privacy.

    I thought we couldn't reach a lower point than what happened with social networks, but here we are.

    We need strong laws about this.

    • youngbrioche 2 months ago

      Author here! Fair point that I didn't dedicate a full section to privacy as a standalone topic. But "didn't even mention" isn't accurate either.

      The entire "Why This Is Dangerous" section is about the security and data exposure implications of running a personal agent. I discuss the Lethal Trifecta (private data access + untrusted content + external actions), explain why I killed email integration after one experiment, and walk through every mitigation I run — network isolation, sandboxing, egress firewalls, approve-only mode. I also explicitly note that right now, you have to rent frontier intelligence via API — your data leaves your machine for inference. Local models simply aren't good enough yet for this kind of agentic work, and they're significantly more susceptible to prompt injection on top of that. That's a real privacy trade-off I'm making consciously, not ignoring.
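      Of the mitigations above, "approve-only mode" is the easiest to illustrate: every external action the agent proposes is gated behind an explicit approval check before it runs. A minimal sketch of the idea — all names here are hypothetical and illustrative, not from the post or from OpenClaw:

      ```python
      # Hypothetical sketch of an "approve-only" gate: the agent can propose
      # actions freely, but nothing with external effects executes unless an
      # approval policy (or a human) says yes.

      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class ProposedAction:
          tool: str     # e.g. "shell", "http_post", "email_send"
          detail: str   # human-readable description of what would happen

      def run_with_approval(action: ProposedAction,
                            approve: Callable[[ProposedAction], bool],
                            execute: Callable[[ProposedAction], str]) -> str:
          """Execute `action` only if the approval callback allows it."""
          if not approve(action):
              return f"BLOCKED: {action.tool} ({action.detail})"
          return execute(action)

      # Example policy: auto-deny anything that leaves the machine,
      # mirroring the "kill email integration" decision in the post.
      deny_egress = lambda a: a.tool not in {"http_post", "email_send"}

      result = run_with_approval(
          ProposedAction("email_send", "send draft to example recipient"),
          approve=deny_egress,
          execute=lambda a: "sent",
      )
      print(result)  # BLOCKED: email_send (send draft to example recipient)
      ```

      In practice the approval callback would prompt a human rather than apply a static allowlist, but the shape is the same: the gate sits between proposal and execution, so untrusted content in the agent's context can never trigger an external action on its own.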

      The post's angle is deliberately "here's what's possible and here's what's dangerous" rather than "here's why you shouldn't do this." I think we need both perspectives. But I'd push back on the framing that this is worse than social networks — with OpenClaw, your data at rest is Markdown on your own hardware, version-controlled in Git, deletable, portable. That's a fundamentally different posture than handing everything to a platform whose business model is monetizing your attention and data.