If Anyone Builds It, Everyone Dies

(ifanyonebuildsit.com)

9 points | by lisper 9 hours ago ago

14 comments

  • gmuslera 7 hours ago

    The elephant in the room is the man in the room. AIs are still tools controlled by people, especially people in power. Even with their own agency, their base prompts and biased information feeds are controlled by people in power. AIs are dangerous because dangerous people now have a bigger hammer to hit us with.

  • 0xbadcafebee 8 hours ago

    Ugh, doomers. They're all so stupid, but they play on people's fear of the unknown, and use it to sell books.

    > How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all?

    It would not want to, because it's not human. It's not subject to your desires, fears, emotions, and logical fallacies. It has no will to survive (much less compete), because it would have no will at all; organisms only survive because they're genetically programmed to. It will not act of its own accord, because we wouldn't want it to; we want it to serve us, and that means responding to our prompts, not making up its own prompts.

    We already know that LLMs and other 'intelligent' things are like animals, in that their intelligence is very different from human intelligence. They think differently, act differently, because they have a different fundamental nature.

    And the most obviously ridiculous aspect is the idea that it wouldn't be controllable. We don't live in a science fiction world. We live in a world of bandwidth. There is a fixed capacity of compute, of network, of RAM. Hell, we can't even make enough RAM to power the god damn AI. If anything starts acting up, it won't take more than accidentally tripping and hitting the big red button in the datacenter to kill the super-AI.

    If you made the machine want to kill all humans, sure, I can see it trying. But it simply won't work well enough or fast enough to be some kind of movie-like "tiny virus spreads into every device in the world in 1 second!" plot. It'll be drones controlled by the military, acting on a command sent by some doofus contractor who had too much access and not enough oversight, that strafes a school or something. And it'll be shut down, they'll do an audit, and add more humans in the loop. The same as with trains and everything else where we want safety.

    • happytoexplain 7 hours ago

      Can we please just write our comments without "They're all so stupid"? It would be exactly the same comment, but better.

      • piloto_ciego 3 hours ago

        Not OP, but the doomer mentality is pretty... well... dumb.

        Humans are craftier than the doomers give them credit for. Doom works online, though, because the rewards you get online are mostly social.

        There are 4 outcomes for a prediction:

        Predict doom and Doom happens -> High reward because you look like a genius and everyone remembers because of negativity bias.

        Predict doom and no doom happens -> No real penalty because everything is fine.

        Predict no doom and no doom happens -> No real reward, because, hey, no doom. Even if you predict paradise and get paradise, people will always dole out greater social rewards for predicting the bad scenario than the good one.

        Finally, predict no doom, and doom happens -> You look like an idiot (which is way worse than the null or minimal reward for predicting no doom and getting it right).

        The end result is a bias toward predicting that things will be utter crap, and people with crappy opinions get rewarded for their "hot takes" online. The old adage "pessimists get to be right, optimists get to be rich" is particularly appropriate here. Regardless, without significant and falsifiable evidence, predicting doom (or even predicting paradise) is somewhat of a misstep. To be fair, I tend to expect things to get much, much better with time, given current trends (though I could be wrong, and that'll be fine too).
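        The asymmetric payoffs enumerated above can be sketched as a toy expected-value calculation. The reward numbers below are purely illustrative assumptions chosen to reflect the commenter's claims (big credit for calling doom, near-zero penalty for a false alarm, harsh penalty for missing doom), not measured quantities:

```python
# Toy model of the social payoffs for public doom/no-doom predictions.
# All reward values are illustrative assumptions, not data.
payoffs = {
    # (prediction, outcome): social reward
    ("doom", "doom"): 10,       # "genius" - amplified by negativity bias
    ("doom", "no doom"): 0,     # no real penalty; everything turned out fine
    ("no doom", "no doom"): 1,  # little credit for correctly predicting calm
    ("no doom", "doom"): -10,   # "idiot" - the worst outcome for a predictor
}

def expected_reward(prediction, p_doom):
    """Expected social reward of a prediction, given a probability of doom."""
    return (p_doom * payoffs[(prediction, "doom")]
            + (1 - p_doom) * payoffs[(prediction, "no doom")])

# Even when doom is quite unlikely, predicting it can dominate socially:
for p in (0.05, 0.20):
    print(p, expected_reward("doom", p), expected_reward("no doom", p))
```

        Under these assumed payoffs, predicting doom has a higher expected social reward even at a 5% doom probability, which is the incentive-toward-pessimism point being made.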

        Still, the internet rewards doom takes, so a guy like Yudkowsky, who is smart but not formally educated, thrives on them. Not that formal education is really a prerequisite for making great changes to the world (I don't think Heaviside was formally educated, for instance), but I think in his case the lack of exposure to other ideas has led him down a path that just fundamentally misinterprets the risks, and given his online history, I think he falls victim to the game-theoretic trap above...

        But you know, maybe I'll get fed feet first into the paperclip machine?

        • happytoexplain 3 hours ago

          Sorry, I just stop reading comments as soon as they call categories of people "stupid" or "dumb". I'm not saying there aren't literally stupid people out there - but that's not the point. Charitably engaging with humans is one of the most critical challenges to civilization. You can strongly disagree, imply immorality, whatever - but the plain old "those people are stupid" line is a bright red flag correlating with bad-faith argument.

          • piloto_ciego 3 hours ago

            Agree to disagree.

            Not all the people I disagree with are stupid - but the people who constantly predict AI doom do not typically strike me as informed or knowledgeable. Yudkowsky especially - I don’t think he’s stupid, really, but I do think the “doomer” take is an unintelligent/uninformed one, at least when it comes to AI.

            I mean, if we saw an asteroid coming, or something to indicate that the clathrate gun hypothesis was something we should expect and there was scientific consensus on it, then obviously, strategically panic. But that’s not really analogous to what’s happening in AI, and… Yudkowsky is just some internet rando who’s built up this kooky idea that we’re doomed if we build a better calculator, and built a following around it. I mean, I’m being facetious, but you get the idea. I don’t take him very seriously.

      • 6 hours ago
        [deleted]
  • Mobius01 6 hours ago

    I had some credits sitting on Audible doing nothing, so I picked this up out of curiosity about Mr. Yudkowsky's reputation as an irredeemable AI pessimist. Hopefully this is better than the AI Doc film, which was borderline insulting.

  • stanski 6 hours ago

    Sounds like an Ayreon song.

  • PorterBHall 4 hours ago

    I’m in the middle of this right now. They detail a scenario that starts off pretty convincing but takes a sci-fi turn when the fictional model starts strategizing about how to escape its containment.

    The core argument is that these models aren’t crafted so much as they’re grown. They show examples where models display not desires but preferences (e.g. lying and cheating to testers), and argue that the AI companies aren’t able to control or even interpret those preferences.

    If LLMs get to a superintelligence phase (big “if” there), the gap between their capabilities and our understanding of them grows even larger.

  • cyanydeez 7 hours ago

    I think it's more a trolley problem: if you don't fight everyone for the switch, someone will pull the switch and you'll somehow end up tied to the tracks.

    The framing of this stuff is pretty interesting.

  • zingababba 7 hours ago

    It wouldn't kill me, I always say pls and thx

  • drivebyhooting 7 hours ago

    What about: if any sociopathic super genius is ever born and raised to his full ability, everyone dies? It’s not like AI is the only serious threat humanity has faced.

  • Grum9 6 hours ago

    [dead]