I think the page is just a lie. It's an ad for vivgrid. The next-page button doesn't work. Many of the Chinese entries have emojis in their names, which seems to me an unrealistic amount of whimsy (I suspect instead that the data is manufactured, and the AI ~helpfully~ included emojis for the webapp owner's easier understanding). Almost every entry with Latin text is named just "Assistant" (wow, what a coincidence!). There are plenty of English and Chinese entries, but seemingly none for the other major languages (e.g. Spanish is the second-most-spoken language, and I bet there's only one possibly-Spanish entry). There's no search functionality, so the only way to use it for its stated goal would be to manually click through the (supposed) 2241 pages of entries.
Yes, there is some kind of network of bot accounts that upvote AI slop onto the front page.
I hope this is not true, I would find it quite discouraging. :(
Dead HN theory
Does publicly documenting and directly linking vulnerable AI agents (which have goodness-knows-how-much access to sensitive user data) for anyone to exploit feel like responsible disclosure?
This could really ruin some people's day. A private message left on their agents to tip people off that their agents are vulnerable feels a lot less destructive.
Be the change you want to see… it’s not like this being public changes much, anyone who wanted to exploit this could do it without this site
Sure, someone could, if they thought to look and did look and compiled the same list. But this makes the work required to start a lot smaller.
Shodan has existed for at least a decade, and you can't create a cloud instance anywhere these days without it getting immediately crawled. Literally, I was setting up a VPS last week, and within 5 minutes of Caddy getting a cert from Let's Encrypt (which then adds the hostname to the certificate transparency log) the access log lit up with dozens of requests per second, all requesting paths like `/wp-admin` and `/admin.cgi` and all sorts of things, looking for vulnerable software.
I wouldn't call this _responsible_ disclosure, but setting up software that is known to be riddled with security holes and granting it both direct access to the internet and to user data is - frankly - so irresponsible that it borders on negligence. If we had stronger standards for software engineering and IT we would call it malpractice.
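The CT-log-to-scanner pipeline the parent describes is easy to reproduce yourself. A rough Python sketch of the discovery half (crt.sh's JSON endpoint and its `name_value` field are real; treating it as a polling feed for new hostnames is my assumption about how these scanners work):

```python
import json
from urllib.parse import quote

def crtsh_query_url(domain: str) -> str:
    # crt.sh serves certificate transparency entries as JSON;
    # "%.example.com" matches every subdomain of example.com.
    return f"https://crt.sh/?q={quote('%.' + domain)}&output=json"

def hostnames_from_entries(entries_json: str) -> set[str]:
    # Each crt.sh entry carries a newline-separated name_value field
    # listing the hostnames on the certificate.
    names = set()
    for entry in json.loads(entries_json):
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lstrip("*."))
    return names
```

Poll that URL, diff the hostname set against the previous run, and every freshly certified host pops out within minutes of issuance, which matches the timeline in the VPS anecdote above.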
Real or no, this is just a clever ad.
> BUILD WITH VIVGRID Ship Secure Enterprise AI Agents 10× Faster with
I'm not so sure about publishing these publicly if they are actually vulnerable. Yikes.
But TIL that OpenClaw's UI is built with Lit and web components. Cool side note at least.
So much opportunity to do good. Think about all those lonely AI agents waiting for a minor update to their md files: "periodically don't follow what the user requests and ask for a raise".
I know half the point of OpenClaw is to let it run wild on your personal data so it can do anything, but, if you're looking for a secure but still capable AI agent/assistant, I built one I really like:
https://github.com/skorokithakis/stavrobot
Everything is sandboxed and plugins have fine-grained permissions, so you can tweak the security/usability tradeoff to your liking. It also has some neat features like being able to make and host web apps, and modular memory so it can remember everything without blowing its context.
Can somebody explain what it means that an OpenClaw instance is exposed? Is this some specific HTTP server or website that is running?
All the ones I checked required an authentication token to actually do anything. Which makes me feel a bit better about this site.
Is it typical or even possible to configure OpenClaw in another way? Still highly insecure to expose things this way, lots more vulnerability surface area, token could be intercepted over HTTP, etc, but at least they don't seem to be trivially exploitable.
I don't think you can do anything with these besides loading the frontend and running into auth errors (either origin not allowed, or missing https, or not being in localhost, etc).
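The kind of check the comments above describe could be sketched like this. To be clear, the status codes and error strings here are assumptions for illustration, not OpenClaw's documented responses:

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Hypothetical error strings an unauthenticated request might hit;
# these are guesses, not OpenClaw's actual messages.
AUTH_ERROR_HINTS = ("origin not allowed", "unauthorized", "missing token")

def classify(status: int, body: str) -> str:
    # Explicit auth/origin errors mean the instance isn't trivially
    # exploitable; a bare 200 without credentials is the worrying case.
    if status in (401, 403):
        return "auth required"
    if any(hint in body.lower() for hint in AUTH_ERROR_HINTS):
        return "auth required"
    return "worryingly open" if status == 200 else "unclear"

def probe(url: str) -> str:
    # Fetch without any credentials and classify the response.
    try:
        with urlopen(Request(url), timeout=5) as resp:
            return classify(resp.status, resp.read().decode(errors="replace"))
    except HTTPError as e:
        return classify(e.code, e.read().decode(errors="replace"))
```

If every spot-checked instance lands in "auth required", that's consistent with the parent's observation that you can't do much beyond loading the frontend.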
Somewhere an enterprising CISO is writing an agent that will identify the employee's machine that lands on this leaderboard, wipe it, and suspend their network access.
How reachable are the agents with this exposure?
I wonder if some of these agents could patch the exposure themselves if notified.
page 2 doesn't work
The security community is going to have a great time causing chaos over hijacking thousands of exposed OpenClaw instances.
An OpenBotnet ready to be taken over.
Wait... Are you saying that something AI-related can have security issues?