AI CEO – Replace your boss before they replace you

(replaceyourboss.ai)

286 points | by _tk_ 3 hours ago

109 comments

  • briandw 42 minutes ago

    Ok so clearly a satire. However I kinda want this. They make some really good points about how an AI would be better than many CEOs. Honestly some of the companies I've worked for would be better with Gemini in charge. Yes humanity is doomed, but at least I would understand the motivations and we'd have fewer CEO ADHD moments. (CEO ADHD -> "Some other CEO told me about X, why aren't we doing X")

    • arbuge 5 minutes ago

      I'd be concerned about all the CEO's reports prompt injecting the boss though.

      Give me a raise so I can buy her medicine, or my grandma dies...

    • caminanteblanco 20 minutes ago

      I feel like if I mention technology X in my system context for Gemini, there is a 100% chance that when I ask for hiking recommendations Gemini will say "As a user of technology X, you would appreciate the beauty and elegance of the Cuyamaca National Forest"

  • chasing0entropy 2 hours ago

    Can you design an AI agent that I own, to replace me? This is what the market really wants and is probably one of the ONLY things that doesn't exist.

    Just let me subscribe to an agent to do my work while I keep getting a paycheck.

    • georgehotz 2 hours ago

      Who's giving you that paycheck? Why don't they just hire that AI agent themselves and cut out the middle man?

      • Mtinie 2 hours ago

        In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.

        • georgehotz an hour ago

          How do they own it? I see two scenarios.

          1. They built the agent and it's somehow competitive. If so, they shouldn't just replace their own job with it, they should replace a lot more jobs and get a lot more rich than one salary.

          2. They rent the agent. If so, why would the renting company not rent directly to their boss, maybe even at a business premium?

          I see no scenario where there's an "agent to do my work while I keep getting a paycheck."

          • dr_dshiv an hour ago

            If you know contracting, you know that’s exactly how it’s always worked.

          • oarsinsync an hour ago

            It's the equivalent of outsourcing your job. People have done this before, to China, to India, etc. There are stories about the people that got caught, e.g. with China because of security concerns, and with India because they got greedy, were overemployed, and failed in their opsec.

            This is no different, it's just a different mechanism of outsourcing your job.

            And yes, if you can find a way to get AI to do 90% of your job for you, you should totally get 4 more jobs and 5x your earnings for a 50% reduction in hours spent working.

            • georgehotz an hour ago

              Maybe a few people managed to outsource their own job and sit in the middle for a bit. But that's not the common story, the common story is that your employer cut out the middle man and outsourced all the jobs. The same thing will happen here.

          • EGreg 15 minutes ago

            Let me generalize

            The problem is that the organizing principle for our entire global society is competition.

            This is the default, the law of the jungle or tribal warfare. But within families or corporations we do have cooperation, or a command structure.

            The problem is that this principle inevitably leads to the tragedy of the unmanaged commons. This is why we are overfishing, polluting the Earth, why some people are freeriding and having 7 children with no contraception etc. Why ecosystems — rainforests, kelp forests, coral reefs, and even insects — are being decimated. Why one third of arable farmland is desertified, just like in the US dust bowl. Back then it was a race to the bottom and the US Govt had to step in and pay farmers NOT to plant.

            We are racing to an AIpocalypse because what if China does it first?

            In case you think the world doesn't have real solutions… actually there have been a few examples of us cooperating to prevent catastrophe.

            1. Banning CFCs in Montreal Protocol, repairing hole in Ozone Layer

            2. Nuclear non-proliferation treaty

            3. Ban on chemical weapons

            4. Ban on viral bioweapons research

            So number 2 is what I would hope would happen with huge GPU farms: as a global community we know the supply chains exactly; heck, there is only one company in Europe doing the etching.

            And I would also want a global ban on AGI development, or at least a ban on leaking model weights. Otherwise it is almost exactly like giving everyone the means to make chemical weapons, designer viruses, etc. The probability that NO ONE does anything that gets out of hand will be infinitesimally small. The probability that we will be overrun by tons of destructive bot swarms and robots is practically 100%.

            In short: this is the ultimate negative externality. Corporations and countries are in a race to outdo each other in AGI even if they destroy humanity doing it. All because, as a species, we are drawn to competition and don't do the work to establish frameworks for cooperation the way we have on local scales like cities.

            PS: meanwhile, having limited tools and not AGI or ASI can be very helpful. Like protein folding or chess playing. But why, why have AGI proliferate?

      • OtherShrezzing an hour ago

        The AI agents don’t appear to know how & where to be economically productive. That still appears to be a uniquely human domain of expertise.

        So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.

        It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output and a revenue-generating opportunity for OpenAI/Anthropic/whoever.

      • danenania 2 hours ago

        A question is which side agents will achieve human-level skill at first. It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.

        It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.

        • jMyles an hour ago

          > This begs the question of which side agents will achieve human-level skill at first.

          I don't agree; it's perfectly possible, given chasing0entropy's... let's say 'feature request', that either side might gain that skill level first.

          > It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.

          Agreed - and for many of us, that's exactly what seems to be happening. My agent is vaguely closer to the role that a good manager has played for me in the past than it is to the role I myself have played - it keeps better TODO lists than I can, that's for sure. :-)

          > It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.

          Perfectly stated IMO.

      • zwnow 2 hours ago

        How are businesses going to get money if there are no humans that are able to pay for goods?

        Lots of us are not cut out for blue collar work.

        • lurk2 an hour ago

          > How are businesses going to get money if there are no humans that are able to pay for goods?

          By transacting with other businesses. In theory comparative advantage will always ensure that some degree of trade takes place between completely automated enterprises and comparatively inefficient human labor; in practice the utility an AI could derive from these transactions might not be worth it for either party—the AI because the utility is so minimal, and the humans because the transactions cannot sustain their needs. This gets even more fraught if we assume an AGI takes control before cheaply available space flight, because at a certain point having insufficiently productive humans living on any area of sea or land becomes less efficient than replacing the humans with automatons (particularly when you account for the risk of their behaving in unexpected ways).

        • vbezhenar an hour ago

          Some humans will be rich and they'll buy things. For example, the humans who own the AI or the fabs. And the humans who serve them (assuming there will be services not replaced by AI, prostitution for example) will also buy things.

          If 99.99% of other humans become poor and eventually die, it will certainly change the economy a lot.

          • ares623 an hour ago

            That's assuming a large chunk of humanity will just lie down and die off.

          • coliveira an hour ago

            The 99% of humans will band together and destroy the ones who "own" things.

        • Joker_vD an hour ago

          There are people who own, well, in the past we could say "means of production" but let's not. So, they own the physical capital and the AI worker-robots, and this combination produces various goods for human use. They (the people who own that stuff) trade those goods between each other, since nobody owns the full range of production chains.

          The people who used to be hired workers? Eh, they still own their ability to work (which is now completely useless in the market economy) and not much more, so... well, they can go and sleep under the bridge or go extinct or do whatever else peacefully, as long as they don't trespass on private property, the sanctity and inviolability of which is obviously crucial for societal harmony.

          So yeah, the global population would probably shrink down to something in the hundreds of millions or so in the end, and ironically, the economy may very well end up being self-sustainable and environmentally green and all that nice stuff, since it won't have to support the living standards of ~10 billion people, although the process of getting there could be quite tumultuous.

          • coliveira 43 minutes ago

            Thanos at least planned to destroy only half of the population. This is already beyond dystopian.

          • zwnow an hour ago

            This is disgusting to read, not going to lie. Hopefully the workers just lynch the people who enriched themselves on other people's work.

        • macintux an hour ago

          As long as someone else is still paying their employees, it’s all good.

      • fijiaarone 2 hours ago

        Can you explain why we pay Sam Altman & Elon Musk? Or Jeff Bezos & Bill Gates? They’re just middlemen collecting money for other people’s labor.

        • georgehotz an hour ago

          You are welcome to try to cut them out and start your own business. But I suspect you might find it a bit harder than your employer signing up for a SaaS AI agent. Actually wait, isn't that what this website is? Does it work?

        • coliveira 42 minutes ago

          Scam Altman and Musk are paid to manipulate stock markets and enrich themselves and their friends.

        • gridspy an hour ago

          They are a bridge between those with money and those with skill. Plus they can aggregate information and act as a repository of knowledge and decision maker for their teams.

          These are valuable skills, though perhaps nowhere near as valuable as they end up being in a free market.

          • gausswho 34 minutes ago

            Sounds like skills that bots already do better than humans.

          • nawgz an hour ago

            A mistake lies in thinking it’s a market, but it’s egregious you’d call it free

        • dboreham an hour ago

          This is backwards. Those people got into the positions they have by having money to spend, not because someone wanted to pay them to do something. (Or they had a way to have control over spending someone else's money.)

          • georgehotz an hour ago

            Do people on Hacker News actually believe this? Each one of the four people named built a product I happily pay for! Then they used investment and profits to hire people to build more products and better products.

            There's a lot of scammers in the world, but OpenAI, Tesla, Amazon, and Microsoft have mostly made my life better. It's not about having money, look at all the startups that have raised billions and gone kaput. Vs say Amazon who raised just $9M before their $54M IPO and is still around today bringing tons of stuff to my door.

            • coliveira 39 minutes ago

              The most successful scammers will provide you with something of value and then act to swindle you and many others out of multiple times the amount of "value" they're generating. With Musk and his friends this seems to be the pattern.

              • georgehotz 30 minutes ago

                Musk sells several things. Electric cars for $40k-$100k. Satellite internet for $40-$120 per month. X/Grok premium for $8/mo. And space launch services for about $2,500 per kg. Which one(s) of these are the scam? Prices seem decent to me, but if you tell me where I can get cheaper and better I'm open to it.

                • hnjobsearch 12 minutes ago

                  The "scam" part of Tesla has been well-documented, from their failure to deliver reliable full self-driving to the Cybertruck's low quality manufacturing, there is a lot of information out there about it.

                  I can't comment on the other things.

                  • georgehotz 4 minutes ago

                    comma.ai owns a lot of cars, including a Tesla, so I have tried most cars in the price range. Tesla is certainly no more of a scam than the other cars, and compared to say, the Chevy Bolt, it's a lot better. Can you suggest a better car for the value? Is there another car I can buy with better full self driving?

    • crackalamoo 2 hours ago

      Isn't this kind of the same as an AI copilot, just with higher autonomy?

      I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form.

    • ErroneousBosh 38 minutes ago

      > Just let me subscribe to an agent to do my work while I keep getting a paycheck.

      I've already done this. It's just a Teams bot that responds to messages with:

      "Yeah that looks okay, but it should probably be a database rather than an Excel spreadsheet. Have you run it past the dev team? If you need anything else just raise a ticket and get Helpdesk to tag me in it"

      "I'm pretty sure you'll be fine with that, but check with {{ senior_manager }} first, and if you need further support just raise a ticket and Helpdesk will pass it over"

      "Yes, quite so, and indeed if you refer to my previous email from about six months ago you'll see I mentioned that at the time"

      "Okay, you should be good to go. Just remember, we have Change Management Process for a reason so the next time try to raise a CR so one of us can review it, before anyone touches anything"

      and then

      "If you've any further questions please stick them in an email and I'll look at it as a priority.

      Mòran taing,

      EB."

      (notice that I don't say how high a priority?)

      No AI needed. Just good old-fashioned scripting, and organic stupidity.
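
      In case anyone wants to reproduce it, the core is roughly this (a minimal Python sketch of the idea only; the actual Teams webhook and auth plumbing is left out, and the handler here just picks a canned deflection at random):

        import random

        # Canned deflections, trimmed versions of the messages above.
        CANNED_REPLIES = [
            "Yeah that looks okay, but it should probably be a database rather "
            "than an Excel spreadsheet. Have you run it past the dev team?",
            "I'm pretty sure you'll be fine with that, but check with "
            "{{ senior_manager }} first.",
            "Yes, quite so, and indeed I mentioned that in my email about six months ago.",
            "Remember, we have Change Management Process for a reason, so raise a CR next time.",
        ]

        SIGN_OFF = (
            "\n\nIf you've any further questions please stick them in an email "
            "and I'll look at it as a priority.\n\nMòran taing,\nEB."
        )

        def reply(_message: str) -> str:
            """Every incoming message gets a random deflection plus the sign-off."""
            return random.choice(CANNED_REPLIES) + SIGN_OFF

        if __name__ == "__main__":
            print(reply("Hi EB, quick question about the reporting spreadsheet..."))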

      • xtracto 25 minutes ago

        Reminded me of an episode of The IT Crowd where they put a recording of "Have you tried turning it off and on again?" as the answering machine for an IT department.

    • globular-toast an hour ago

      What would you actually do if you got that? I like watching movies and playing games, but that lifestyle quickly leads to depression. I like travelling too, but imagine if everyone could do it all the time. There's only so many good places.

      • Teever 4 minutes ago

        I would use the AI to build a robot that could build copies of itself and then once there are a sufficient number of robots I'd use them to build more good places to go to.

    • cyanydeez an hour ago

      not unless you can afford your own super cluster. Otherwise, the AI you use will own you.

    • IshKebab 2 hours ago

      Why would the market want that? Don't be stupid.

      • geoffmanning an hour ago

        The world doesn't want assholes either but here we are

    • -_- 2 hours ago

      That's the premise behind Workshop Labs! https://workshoplabs.ai

  • candiddevmike 2 hours ago

    Really this is the only 10x part of GenAI that I see: increasing the number of reports exponentially by removing managers/directors, and using GenAI (search/summarization, e.g. "how is X progressing" etc) to understand what's going on underneath you. Get rid of the political game of telephone and get leaders closer to the ground floor (and the real problems/blockers).

    • dboreham an hour ago

      Also replaces lawyers.

      • IgorPartola an hour ago

        From what I hear, this will not happen. AI keeps absolutely making up laws and cases that don’t exist no matter what you feed it. Basically anything legal written or partially written by AI is a liability. IANAL but have been reading a tiny bit about it.

        • Xunjin 16 minutes ago

          Worth noting that being a lawyer is not just reading a text and saying "true or false"; it requires interpretation and an understanding of how a society changes/evolves, and depending on the country it leans on jurisprudence or on something more analytical (written laws).

          I have a hard time seeing why a portion of the HN audience has such a narrow view of justice systems and politics.

        • cheema33 27 minutes ago

          The need for lawyers will shrink and is shrinking. My company used to call lawyers for many little things. Now it is easy to ask an LLM and have the second LLM verify it. For super critical things, we may still call lawyers. And in the courtrooms, you will still see lawyers. But everywhere else the need for lawyers will keep going down.

          • insin 17 minutes ago

            > Now it is easy to ask an LLM and have the second LLM verify it

            I genuinely can't tell when people are and aren't being serious any more.

        • Jabrov an hour ago

          Ehhh, just calling a raw LLM is not going to replace anyone and will be prone to hallucination, sure. But lawyers are increasingly using LLM systems, and there are law-specific products that are heavily grounded (i.e. they can only respond from source material).
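
          The grounding isn't magic either; at its simplest it's retrieval plus a restrictive prompt. A rough sketch of that pattern (the "retrieval" here is a hardcoded placeholder list and the model name is only an example; real products use a proper search index over statutes and case law):

            from openai import OpenAI

            client = OpenAI()

            # Placeholder for a real retrieval step; these snippets are made up.
            RETRIEVED_SOURCES = [
                "[Doc A] Notice of termination must be given in writing 30 days in advance.",
                "[Doc B] Notice delivered by email has been held to satisfy the writing requirement.",
            ]

            def grounded_answer(question: str) -> str:
                """Answer only from the retrieved passages; refuse if they don't cover it."""
                system = (
                    "Answer ONLY using the provided sources, citing them by tag. "
                    "If the sources do not cover the question, say you cannot answer."
                )
                user = "Sources:\n" + "\n".join(RETRIEVED_SOURCES) + f"\n\nQuestion: {question}"
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # example model name
                    messages=[
                        {"role": "system", "content": system},
                        {"role": "user", "content": user},
                    ],
                )
                return resp.choices[0].message.content

            print(grounded_answer("Can a termination notice be sent by email?"))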

  • Barathkanna 20 minutes ago

    The site is obviously satire, but the interesting part is the growth tactic behind it. oilwell.app is using a meme page as a distribution engine instead of a standard marketing site.

    In a crowded AI tooling market, this kind of contrast joke on the front, paired with a real product behind it, cuts through the noise in a way a normal landing page wouldn't. People mock the gimmick, but the gimmick is doing exactly what it's designed to do: get everyone talking.

  • hbarka an hour ago

    Our CEO did not write a customary Thanksgiving email. There was nothing from the other C-level leadership either. I've been around long enough to notice this erosion of company culture and custom. What is happening? Perhaps an AI CEO would handle these subtleties.

  • jondwillis 2 hours ago

    I love that they’re all called David except for Simon

    • coliveira 37 minutes ago

      This is a common thing among their mafia.

  • keiferski 2 hours ago

    This looks like the perfect counterpart to Boss as a Service:

    https://bossasaservice.com/

  • coffeecoders 2 hours ago

    How hard would it be to run a simulator with multiple LLMs? Say, one as the boss and a few as employees. Just let them talk, coordinate, and "work". Could be the fastest way to test what actually happens when you try to automate management.
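
    A bare-bones version is maybe 30 lines. Something like this sketch (assuming any OpenAI-compatible API; the model name and the role prompts are just placeholders):

      from openai import OpenAI

      client = OpenAI()
      MODEL = "gpt-4o-mini"  # placeholder

      ROLES = {
          "Boss": "You are the CEO. Set priorities, assign work, and demand status updates.",
          "Alice": "You are an engineer. Do the work and push back on unreasonable asks.",
          "Bob": "You are an engineer. Do the work, but occasionally misunderstand instructions.",
      }

      transcript = ["Boss: Team, the Q3 dashboard ships Friday. What's the plan?"]

      def speak(name: str) -> str:
          """One turn: the named agent reads the shared transcript and adds a line."""
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[
                  {"role": "system", "content": ROLES[name] + " Reply in one or two sentences."},
                  {"role": "user", "content": "\n".join(transcript) + f"\n{name}:"},
              ],
          )
          return resp.choices[0].message.content.strip()

      for _ in range(5):  # a few rounds of simulated "work"
          for name in ROLES:
              line = f"{name}: {speak(name)}"
              transcript.append(line)
              print(line)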

    • ai-christianson 2 hours ago

      This is quite literally what we've built @ Gobii, but it's prod ready and scalable.

      The idea is you spin up a team of agents, they're always on, they can talk to one another, and you and your team can interact with them via email, sms, slack, discord, etc.

      Disclaimer: founder

      • jayd16 an hour ago

        Can I get this in an ant-farm mode where I can see them doddle around a cube-farm office?

      • coffeecoders an hour ago

        Interesting approach, but I mean more in the sense of a multi-agent sandbox than workflow automation. Your project feels like wrapping a bunch of LLMs into "agents" with fixed cadences; it's a neat product idea, even if it mostly ends up orchestrating API calls and cron jobs.

        The thing I’m curious about is the emergent behavior, letting multiple LLMs interact freely in a simulated organization to see how coordination, bottlenecks, and miscommunication naturally arise.

        Cool project regardless!

        • ai-christianson 18 minutes ago

          Agreed, the emergent behavior is the most interesting and valuable part. We don't want bad emergent behavior (agents going rogue), but we do want the good kind (solving problems in unexpected ways).

      • krater23 2 hours ago

        And they simulate an outsourced team, where the enterprise that pays for the team doesn't know that it's just AI and just thinks that these Chinese/Indian/African people on the external team are really bad at what they are doing.

    • PhilippGille an hour ago

      Multiple projects for autonomous multi agent teams already exist.

    • fragmede 2 hours ago

    • fijiaarone an hour ago

      Left to their own devices, the LLMs would probably design a pocket watch.

  • didibus 2 hours ago

    Joke aside, I do think someone should work on a legitimate agent for financial and business decisions, management, and so on.

    Especially "decision making". I find that's one of the tricky parts: making the AI agent optimize for actually good decisions, not just give you info or options, but form a real opinion and take real decisions.

    • callamdelaney an hour ago

      Unfortunately, LLMs aren't good at making decisions.

      • bofadeez 35 minutes ago

        I know they're supposed to be smarter than a year ago but you could have fooled me

        I'm in a loop with Opus 4.5, telling it "be logically consistent", and then it says "you're absolutely right" and proceeds to be logically inconsistent again for the 20th time.

    • thisisit an hour ago

      What kind of financial and business decisions? And what will be the metric for “good decision”?

      • yawnxyz 12 minutes ago

        Do we even ask these questions of existing analysts? If anything, we should be evaluating them head to head.

  • jvanderbot 2 hours ago

    My boss is a pretty awesome technologist, too, but has a lot of time sunk into business stuff.

    I sent this along as a joke but I doubt any of us are enthused about working for an AI.

    It would be cool to automate more of that business stuff but I suspect it's too "soft" to actually automate.

  • dijksterhuis 2 hours ago

    > We don't have meetings, we have collaborative ideation experiences

    yep, checks out.

  • tt24 2 hours ago

    The UI looks good! Is there a reason this is being shared here? Feels like a collection of tired, trite oneliners that I’d expect to see on Twitter rather than here.

    • andy99 30 minutes ago

      Agreed, it's only superficially funny. There's a ton left on the table that could have made it actually good; it doesn't adequately parody CEOs or AI in a way that indicates any insight.

    • input_sh an hour ago

      Thank you brand new account, your contributions so far have clearly been more valuable!

  • zkmon 2 hours ago

    Funny. In fact, blockchain smart contracts (dapps) tried this before, by fully automating (they call it democratizing) the decisions. Not sure how it went.

  • tcgv 39 minutes ago

    Only Male AI-CEO avatars?

    Gender bias checked!

  • thisisit an hour ago

    I like the fun part of it. But this is clearly vibe-coded slop. The awful pink colour scheme, clickable buttons which don't do anything bang in the middle of the page, the share button which doesn't really share, etc.

    And some of the messages keep repeating, like the carbon footprint one. Just seems low effort, and not in a fun way.

    • willis936 an hour ago

      Counterpoints: this joke isn't worth the effort to make it high quality and the jank is part of the joke. AI slop is garbage, presenting it as otherwise would be missing the point.

  • simultsop 2 hours ago

    Shut up and take my money.

  • vanschelven 2 hours ago

    in the same vein as http://developerexcuses.com/ (and presumably many others)

  • pygy_ 2 hours ago

    https://news.ycombinator.com/item?id=20059894

    Called it, six years ago :-)

    I can see boards of directors drooling at the potential savings.

    • belter 2 hours ago

      Tesla can immediately make a saving of $1 Trillion

      • auggierose 2 hours ago

        Love this one.

      • cyanydeez an hour ago

        Unfortunately, that $1T is because Elon's buddies are on the board. They're a bunch of rich human centipedes.

      • tylerflick 2 hours ago

        Musk isn’t getting a trillion. Tesla sales would have to skyrocket.

        • gamblor956 an hour ago

          The package doesn't say who the buyers must be. Musk could just have his other pet companies buy Teslas to meet the threshold.

        • koliber 2 hours ago

          Imagine that sales do skyrocket, but the RoboCEO is in charge and the trillion gets distributed to shareholders.

          • pixelready 2 hours ago

            Imagine that at least half the shares were held by a sovereign wealth fund that paid dividends to every citizen.

  • danenania 2 hours ago

    Though I think the CEO role is realistically one of the hardest to automate, I’d say middle management is a very juicy target.

    To the extent a manager is just organizing and coordinating rather than setting strategic direction, I think that role is well within current capabilities. It’s much easier to automate this than the work itself, assuming you have a high bar for quality.

  • shmerl an hour ago

    Is that you, Delamain?

  • Animats 2 hours ago

    Aw, it's just a joke. I thought someone was ready to really try it.

    Eventually, there will be AI CEOs, once they start outperforming humans. Capitalism requires it.

    • satisfice 2 hours ago

      Capitalism requires that capital is owned and controlled by specific people. So, no, there cannot be an AI CEO. In other words, if you say you have an AI CEO, then that entity will be under the control of someone else, whom you might as well call the real CEO.

      Just like how Twitter had a “CEO” who was some pliable female who did the bidding of the real CEO: Elon Musk.

      • zurfer an hour ago

        There are shareholders/owners and there are CEOs. You can certainly have an AI CEO if the board of directors wants that, although depending on the jurisdiction CEOs might need to be human, but surely not everywhere.

        And you could even imagine AI owners with something like Bitcoin wallets. So far it wouldn't work because of prompt injections but the future could be wild.

      • Octoth0rpe an hour ago

        > Capitalism requires that capital is owned and controlled by specific people.

        That is an overly simplistic description. One can imagine a board of directors voting on which AI-CEO-as-a-service vendor to use for the next year. The 'capital' of the company is owned by the company, and the company is owned by the shareholders. This is not incompatible with capitalism in principle, but it wouldn't surprise me if it were incompatible with some forms of incorporation.

      • groestl an hour ago

        The way AI (and capitalism really) makes CEOs obsolete is by replacing all companies with just one. So only one CEO needed eventually.

  • aussieguy1234 2 hours ago

    You can make this yourself quite easily.

    Choose a UI that lets you modify the system prompt, like Open WebUI.

    Ask Claude to generate a system card for a CEO.

    Copy and paste the output into a system prompt.

    There you have it, your own AI CEO.
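
    Or skip the UI entirely. A minimal sketch against any OpenAI-compatible endpoint (the model name and the prompt below are just placeholders, not output from Claude):

      from openai import OpenAI

      # Point base_url/api_key at your own endpoint if you're self-hosting.
      client = OpenAI()

      CEO_SYSTEM_PROMPT = (
          "You are the CEO of a mid-size software company. "
          "Speak in confident, vague strategy language, ask for OKRs, "
          "and end every answer with an inspirational call to action."
      )

      def ask_ceo(question: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": CEO_SYSTEM_PROMPT},
                  {"role": "user", "content": question},
              ],
          )
          return resp.choices[0].message.content

      print(ask_ceo("Should we rewrite the backend in Rust?"))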

  • j45 2 hours ago

    Great name.

  • artursapek 38 minutes ago

    man, why does slop like this get to the front page yet my project I've been slaving away on dies in "New"

    • cheema33 20 minutes ago

      The answer is easy, if you choose to accept it.

      Your project, whatever it may be, is worse than this AI slop.

      If you think it is not, please share it here. Let us judge it. A little honest feedback might be what you really need.

  • yojat661 2 hours ago

    Can we also replace shareholders with AI?

    • canyp 2 hours ago

      I don't get why people get a boner over CEOs. They are mostly irrelevant; the real power lies further up.

      • mikepurvis an hour ago

        They're at the center of the hourglass that exists between external (board members, shareholders, customers, partners) and internal (employees) interests.

      • gscott an hour ago

        One mention of 3D-printed chicken spins up a new AI CEO, several AI damage-control agents, an AI apology, new AI product ads; repeat as needed.

    • speed_spread an hour ago

      Why waste GPU cycles when a simple bash script would do?

  • damion6 an hour ago

    Looks like that's a response to Linus and the Linux community saying that Qualcomm chips weren't able to run Linux. Hey, it's good though; at least now there's internal support.