The article basically describes a user signing up and finding the site empty other than marketing ploys designed by humans.
It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it like it's in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No, of course not.
It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it gives a dramatic, sci-fi-like response.
The first time I came across this phenomenon was when someone posted, years ago, about how two AIs had developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate while an adversary AI tried to intercept them, and, to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
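For what it's worth, "basic private-key encryption" here just means a shared-secret scheme. A minimal hand-written analogue (a one-time-pad XOR, not the neural version from the study) looks like this:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # Applying the same key twice recovers the original data.
    return bytes(d ^ k for d, k in zip(data, key))

# Alice and Bob share a secret key; the adversary only sees the ciphertext.
message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared out-of-band

ciphertext = xor_bytes(message, key)    # what an eavesdropper intercepts
recovered = xor_bytes(ciphertext, key)  # only key-holders can undo it

assert recovered == message
```

The point being: agents with a shared secret converging on something like this is an unsurprising solution to the task they were given, not emergent scheming.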
The alignment angle doesn't require agency or motives. It's much more about humans setting goals that are poor proxies for what they actually want. Like the classic paperclip optimizer that is not given the necessary constraints of keeping Earth habitable, humans alive, etc.
Similarly, I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10,000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways of doing that could lead to Claude going to RentAHuman for various real-world tasks: set up and restock a vending machine, go to various government offices in person to get permits and taxes sorted out, put out flyers or similar advertising.
The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
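A bare-bones sketch of what "rig up Claude in an agentic loop" means, where `call_model` and the `post_bounty` action are purely hypothetical stand-ins (this is not a real Anthropic or RentAHuman API; a real version would call an LLM and a marketplace):

```python
# Minimal agentic-loop sketch. `call_model` stubs out what would be an
# LLM API call returning the next action; the actions are illustrative.

def call_model(history):
    # Placeholder: a real implementation would send `history` to a model
    # and parse its reply. Here we return a fixed plan for illustration.
    plan = [
        ("post_bounty", "restock the vending machine at location X"),
        ("post_bounty", "file the sales-tax permit in person"),
        ("stop", ""),
    ]
    return plan[min(len(history), len(plan) - 1)]

def run_agent(budget_usd: float, max_steps: int = 10):
    history = []
    for _ in range(max_steps):
        action, arg = call_model(history)
        if action == "stop":
            break
        # A real loop would hit a marketplace API and move funds here;
        # we just record the intended action.
        history.append((action, arg))
    return history

log = run_agent(10_000)
```

The loop itself is trivial; all of the "agency" lives in the model call, which only does what the prompt and plumbing let it do.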
What if I prompt it with a task that takes one year to implement? Will it then have agency for a whole year?
> But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue.
"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.
The AI is real. The "alignment" research that's leading the top AI companies to call for strict regulation is not real. Maybe the people working on it believe it's real, but I'm hard-pressed to think there aren't ulterior motives at play.
You mean the $100 billion company with an increasingly commoditized product offering has no interest in putting up barriers that keep out smaller competitors?
The founder is a friend of mine, so maybe I'm biased, but I'm surprised Wired doesn't get how network effects work and adoption curves happen. At the very least, it seems strange to publish this about a project someone built in a weekend, a few weekends ago, and is now trying to make a go of. Like... give him a couple of months to improve the flow for the bot side and the general discoverability of the platform for agents at large. Maybe I'm a bit grumpy because it's my buddy, but this article kinda rubs me the wrong way. :\
The tech press learned it gets a lot more clicks being anti-tech than being accurate. There is a big zeitgeist against AI and anything related to it.
I'm the founder. Interesting article. AMA?
I have run a lot of multi-sided marketplace scaling (for DoorDash, Thumbtack, Reddit, etc.) with ads. Happy to chat/advise for free; just DMed you on Twitter. This project is so fun!
I just think it's kinda amusing how far away this article is from your real world metrics, lol. Also hi.
Hey! What's crazy is the writer spent 30 minutes interviewing us about our back stories only to not include a single quote.
Uh, that is actually super rude and kinda weird tbh.
Note how the number advertising how many bots actually use RentAHuman has vanished from their website. Instead we now have the number of bounties: 1/40th as many as registered humans. And just scrolling through them, maybe a quarter of the "bounties" are not bounties at all but more humans offering services.
It's a service that is clearly a lot more appealing to humans than to agents.
It's in chicken-and-egg mode: it could be useful if more people and bots used it, but it's not there yet.
> [it] could be useful if more people and bots used it
That's a very optimistic way of looking at things!
Usually it would be a network-effect thing, but in this case, from reading the article, it doesn't even work right (big surprise) and the nature of the tasks is spammy (big surprise). Like a worse Mechanical Turk, minus the determinism of code.
I cannot fathom how being slaves for AI agents translates to usefulness.
The term of art for this is becoming a "Reverse Centaur:"
A “centaur” is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).
https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
We're acclimating ourselves to the inevitable service to our future AI overlords
Applying for the bounty to deliver flowers and then simply not doing it seems like bad faith on the author's part, done in order to write that headline.
The entire site is bad faith to start with, it's human-assigned tasks with a veneer of autonomy to appeal to stupid investors and futurists.
Between the crypto and the vibe coding, the author had no reason to believe they'd actually get paid correctly if they did complete a task.
Experimentation is a lot easier when you've already decided the outcome
https://archive.ph/I3th5
Tangent
I saw a video recently where Google has people walking around carrying backpacks (a lidar/camera setup) to map places cars can't reach. I think that's pretty interesting; maybe it also yields data for humanoid robots, like walking through crowds or navigating alleys.
I wonder if jobs like these could be on there: a "walk through this neighborhood and film it" kind of thing.
Yes, there's also people doing similar things carrying around tablets with cuboidal camera attachments (Lidar) — it's obvious they're working (not tourists).
The problem with that is that you have to trust a gig worker with $12,000 worth of camera equipment.