How AGI became the most consequential conspiracy theory of our time

(technologyreview.com)

66 points | by samuel2 2 hours ago

36 comments

  • TheAceOfHearts an hour ago

    > At the core of most definitions you’ll find the idea of a machine that can match humans on a wide range of cognitive tasks.

    I expect this definition will eventually be proven incorrect. It would be better described as "human-level AGI" rather than AGI. AGI is a system that matches a core set of properties, but it's not necessarily tied to capabilities. Theoretically one could create a very small, resource-limited AGI. The amount of computational resources available to the AGI will probably be one of the factors that determine whether it's, e.g., cat level vs. human level.

    • dr_dshiv 10 minutes ago

      That’s like Peter Norvig’s definition of AGI [1], which is defined with respect to general-purpose digital computers: the general intelligence is the foundation model that can be repurposed to many different contexts. I like that definition because it is clear.

      Currently, AGI is defined in a way where it is truly indistinguishable from superintelligence. I don’t find that helpful.

      [1] https://www.noemamag.com/artificial-general-intelligence-is-...

    • turtletontine 7 minutes ago

      What does this even mean? How can we say a definition of “AGI” is “correct” or “incorrect” when the only thing people can agree on is that we don’t have AGI yet?

  • labrador 27 minutes ago

    Ray Kurzweil and his "The Age of Spiritual Machines", which I read in 1999, are much more to blame than those like Goertzel who came after him, but Kurzweil doesn't get a mention. Kurzweil is also an MIT grad, closely associated with MIT and possibly with the MIT Technology Review.

  • Terr_ 2 hours ago

    Very little I disagree with there, so just nibbling at the edges.

    > a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world.

    Sometimes 90% of the "hidden truths" are things already "known" by the believers, an elite knowledge that sets them apart from the sheeple. The remaining 10% is acquiring some MacGuffin that finally proves they were Right-All-Along so that they can take a victory lap.

    > Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.

    In turn, AGI was the hot new flavor—AI but better!—that companies pivoted to as consumers grew disappointed and jaded with "AI" that wasn't going to give them robot butlers.

    > When those people are not shilling for utopia, they’re saving us from hell.

    Yeah, much like how hatred is not really the opposite of love, the "AI doom" folks are really just a side-sect of the "AI awesome" folks.

    > But what if there are, in fact, shadowy puppet masters here—and they’re the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else’s.

    Yes, the economic engine behind all this, the potential to make money, is what really supercharges everything and lifts it out of niche communities.

  • nitwit005 17 minutes ago

    Imagine someone in the year 1900 who started talking about moon landings and the risk of extinction from atomic weapons. A clearly unhinged individual.

    That the claims appear extreme and apocalyptic doesn't tell us anything about correctness.

    Yes, there are tons of people saying nonsense, but look back at events. For a while it seemed as though AI was improving extremely quickly. People extrapolated from that. I wouldn't call that extrapolation irrational or conspiratorial, even if it proves incorrect.

  • ethin 15 minutes ago

    I always find the claims that we'll have AGI impossible to believe, on the basis that nobody even knows what AGI is. The definition is so vague and hand-wavy that the term might as well never have been defined in the first place. As in: I seriously can't think of a definition that would actually work. I'll explain my thought process, because I may be over-analyzing things.

    If we define it as "a machine that can match humans on a wide range of cognitive tasks," that raises the questions: Which humans? Which range? What cognitive tasks? I honestly think there is no answer you could give to these three alone that wouldn't cause everything to break down again:

    For the first question, if you say "all humans," how do you measure that?

    Do we use IQ? If so, then you have just created an AI able to match the average IQ of whatever "all" happens to be. I'm pretty sure (though I have no data to prove it) that the vast supermajority of people never take IQ tests, if they've even heard of them. So that limits your set to "all the IQ scores we have". But again... who is "we"? Which testing organization? There are quite a few IQ testing centers/orgs, and they all vary in their metrics, scoring, weights, etc.

    If you measure it by some other thing, what's the measurement? What's the thing? And does that risk us spiraling into an infinite debate about what intelligence is? Because if so, the likelihood of us ever getting an AGI is nil. We've been trying to define intelligence for literally thousands of years, and we still can't find a definition that is even halfway universal.

    If you say anything other than all, like "the smartest humans" or "the humans we tested it against," well... Do I really need to explain how that breaks?

    For the second and third questions, I honestly don't even know what you'd answer. Is there even one? Even if we collapse them into "what wide range of cognitive tasks?", who creates the range of tasks? Are these tasks any human from, let's say, age 5 onward would be capable of doing? (Even if you answer yes here, what about those with learning disabilities or similar, who may not be able to do whatever tasks you set at that age because it takes them longer to learn?) Or are they tasks a PhD student would be able to do? (If so, then you've just broken the definition again.)

    Even if we rewrite the definition to be narrower and less hand-wavy (say, an AI which matches some core set of properties, as was suggested elsewhere in these comments), who defines the properties? How do we measure them? How do we prove that comparing the AI against these properties doesn't cause us to optimize for the lowest common denominator?

  • jongjong 9 minutes ago

    Based on my personal experience, I feel like we've already had AGI for some time, just based on how centralized society has become. It feels like the system is not working for the vast majority of people, yet somehow it's still holding together in spite of enormous complexity... It FEELS like there is some advanced intelligence holding things together. Some aspects of the system's functioning seem too clever to be the result of human intelligence.

    Also, in retrospect, something doesn't quite add up about the 'AI winter' narrative. It's hard to believe that so many people were studying and working on AI and that it still took so long, given that ultimately, attention is all you need(ed).

    I studied AI at university in Australia over a decade ago. I did the introductory course, which was great; we learned about decision trees, Bayesian probability, and machine learning, and we wrote our own ANNs from scratch. Then I took the advanced course, expecting to be blown away by the material, but the whole course was about mathematics, with no AI theory; even back then there was a lot of advanced material they could have covered (e.g. evolutionary computation) but didn't... I dropped out after a week or two because of how boring it was.

    In retrospect, I feel like the course was made boring and irrelevant on purpose. I remember someone in my circle even mentioning that the AI winter wasn't real... while we were supposedly in the middle of it.

    Also, I remember thinking at the time that evolutionary computation combined with ANNs was going to be the future... so I was kind of surprised that evolutionary computation seemingly disappeared from view... In retrospect, though, I think to myself: progress in that area could potentially lead to unpredictable and dangerous outcomes, so it may not be discussed openly.

    Now I think: take an evolutionary algorithm, combine it with modern neural nets with attention mechanisms, and you'd surely get some impressive results. A rough sketch of what I mean is below.
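
    To make that concrete, here is a minimal toy sketch of the idea (entirely my own illustration, with a made-up task and hyperparameters, not anything from a real research system): a simple (1+lambda) evolution strategy tuning the weights of a tiny single-head self-attention layer, using only numpy. A real system would evolve much larger models against real objectives; this only shows the mechanics.

      import numpy as np

      rng = np.random.default_rng(0)
      T, D = 5, 8  # sequence length, embedding width

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      def attention(params, x):
          # Single-head self-attention; params packs Wq, Wk, Wv.
          Wq, Wk, Wv = params.reshape(3, D, D)
          q, k, v = x @ Wq, x @ Wk, x @ Wv
          return softmax(q @ k.T / np.sqrt(D)) @ v

      def fitness(params, x):
          # Toy objective: reproduce the input sequence (higher is better).
          return -np.mean((attention(params, x) - x) ** 2)

      # (1+lambda) evolution strategy: mutate the parent, keep the best
      # child whenever it beats the parent. No gradients anywhere.
      x = rng.standard_normal((T, D))
      parent = 0.1 * rng.standard_normal(3 * D * D)
      best = fitness(parent, x)
      for generation in range(200):
          children = parent + 0.05 * rng.standard_normal((16, parent.size))
          scores = np.array([fitness(c, x) for c in children])
          if scores.max() > best:
              best, parent = scores.max(), children[scores.argmax()]
      print("final fitness:", best)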

  • everdrive an hour ago

    People are interested in consciousness much the same way that we see faces in the clouds. We just think we're going to find it everywhere: weather patterns, mountains, computers, robots, in outer space, etc.

    If we were dogs, we'd invent a basic computer and start writing scifi films about whether the computers could secretly smell things. We'd ask "what does the sun smell like?"

  • retube an hour ago

    I had to click through 5 (yes, 5, I counted) pop-up overlays to get to the article (including 2 cookie ones, because I guess the usual one is not infuriating enough).

    • hgomersall an hour ago

      I just used Reader View in Firefox. Worked perfectly.

  • AlexandrB 43 minutes ago

    One thing that struck me recently is that LLMs are necessarily limited by what's expressible in existing language. How can this ever result in AGI? A lot of human progress required inventing new language to represent new ideas and concepts. An LLM's only experience of the world is what can be expressed in words. Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that an LLM can never experience, because it's stuck in a purely conceptual, silicon prison.

    • thrance 41 minutes ago

      They experience the world through tokens, which can carry more information than just words. Images can be tokenized; so can sounds, pressure-sensor readings, etc. Below is a toy sketch of what that looks like for images.
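
      For instance, a minimal sketch (my own illustration, loosely in the spirit of ViT-style patch embedding; a real model would also apply a learned projection) of how an image becomes a sequence of "tokens": flattened patches, each playing the role a word embedding plays for text.

        import numpy as np

        def patch_tokens(image, patch=16):
            # Split an (H, W, C) image into flattened patch vectors ("tokens").
            H, W, C = image.shape
            rows, cols = H // patch, W // patch
            img = image[:rows * patch, :cols * patch]  # crop to whole patches
            return (img.reshape(rows, patch, cols, patch, C)
                       .transpose(0, 2, 1, 3, 4)
                       .reshape(rows * cols, patch * patch * C))

        tokens = patch_tokens(np.zeros((224, 224, 3)))
        print(tokens.shape)  # (196, 768): 196 "tokens", each a 768-dim vector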

  • Krasnol 2 hours ago

    > Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of “Feel the AGI!” at team meetings.

    There is...chanting in team meetings in the US?

    Has this been going on for long, or is this some new trend picked up from Asia or something like that?

    • fabian2k an hour ago

      I don't think that is new. Back when Walmart tried to expand to Germany, it was reported that they had employees do some Walmart chant. As you can guess, this didn't go over well with German employees.

      • aomix an hour ago

        I was coming in with the Walmart example too. In our onboarding meeting we were told that Sam Walton overheard it at a Korean manufacturer and liked it.

      • Krasnol an hour ago

        Yeah, I heard that too, but I assumed it was just a thing in that sector, not something highly paid employees actually have to participate in.

    • teeray an hour ago

      It is said chanting pleases the LLM spirits and may bring forth the promised AGI god.

    • uvaursi an hour ago

      FEEL THE AGI.

      This is a meme that will keep on giving.

    • AlexandrB an hour ago

      Last time I heard about something like this in the tech space was Theranos[1]. Doesn't really fill me with confidence about OpenAI.

      [1] https://www.mercurynews.com/2020/11/25/theranos-founder-holm...

  • scarmig an hour ago

    > It’s this myth that’s the root of the AGI conspiracy. A smarter-than-human machine that can do it all is not a technology. It’s a dream, unmoored from reality.

    So, if you assume that AGI is fake and impossible, it's... A conspiracy. Sure.

    Though, if you just finished quoting Turing (and folks like von Neumann), who thought it was possible, it would be good form to offer some reasoning that it's impossible, without alluding to the ineffable human soul or things like that.

    • Terr_ 43 minutes ago

      > if you assume that AGI is fake and impossible

      That seems like a bad straw-man for "AI boosterism has the following hallmarks of conspiratorial thinking".

      > offer some reasoning that it's impossible

      Further on, the author has anticipated your objection:

      > And there it is: You can’t prove it’s not true. [...] Conspiracy thinking looms again. Predictions about when AGI will arrive are made with the precision of numerologists counting down to the end of days. With no real stakes in the game, deadlines come and go with a shrug. Excuses are made and timelines are adjusted yet again.

      • scarmig 35 minutes ago

        If it makes you angry that people want to work to build AGI--people who have thought about it a lot more than you--you can't convince them to stop by repeatedly yelling "I don't think it's possible, you're a fool!"

        No more than yelling "electricity is conspiracy thinking/Satan's plaything!" repeatedly would have stopped engineers in the 19th century from studying and building with it.

        • delusional 11 minutes ago

          Yet yelling "Why would you have to die from blood loss? We can transfuse some right here!" at Jehovah's Witnesses actually does help some.

          We don't have to save everybody, but only by trying do we save some.

  • foxfired an hour ago

    I remember when ChatGPT 3.5 was going to be AGI, then 4, then 4o, etc. It's kind of like doomsday predictions: even if they fail, it's OK, because the next one, oh, that's the real doomsday. I, for one, am waiting for a true AI Scotsman [0].

    [0]: https://idiallo.com/byte-size/ai-scotsman

    • solumunus 25 minutes ago

      I honestly don’t remember many, if any, people saying that at all; those would have been extremely fringe positions.

      • jvelo 14 minutes ago

        Robert Scoble was saying something to that effect about the upcoming ChatGPT 4, if I remember correctly.

      • delusional 14 minutes ago

        From Sam's own blog: "We are now confident we know how to build AGI as we have traditionally understood it."

        Another quote: "Trying GPT-4.5 has been much more of a 'feel the AGI' moment among high-taste testers than I expected!"

  • Copenjin 2 hours ago

    Like conspiracies, it works only on the most fragile and on people who already hold an adjacent set of beliefs. AGI/ASI is all bullshit narrative, but yeah, we have useful artifacts that will get better even if they never become A*I.

  • FridayoLeary 2 hours ago

    > And there it is: You can’t prove it’s not true. “The idea that AGI is coming and that it’s right around the corner and that it’s inevitable has licensed a great many departures from reality,” says the University of Edinburgh’s Vallor. “But we really don’t have any evidence for it.”

    That's the most important paragraph in the article. All the self-serving, excessive exaggerations of Sam Altman and his ilk, predicting things and throwing out figures they cannot possibly know: "AI will cure cancer, and dementia! And reverse global warming! Just give more money to my company, which is a non-profit and is working for the good of humanity. What's that? Do you mean to say you don't care about the good of humanity?" What is the word for such behaviour? It's not hubris; it's a combination of wild prophecy and severe main-character syndrome.

    I heard once, though I have no idea if it's true, that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone. Which is obviously nonsense, but it's exactly the kind of thing he might say.

    In the meantime they're making loads of money by claiming expertise in a field which doesn't even exist and, in my opinion, never will. And that's the main thing, I suppose.

    • Krasnol 2 hours ago

      > I heard once, though i have no idea if it's true that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone.

      That would be quite useless even if it existed, since now that you've said it, the AGISGIAIsomething will surely know about it and take appropriate measures!

      • FridayoLeary an hour ago

        Oh no! Someone better phone up Sam Altman and warn him of my terrible blunder. I would hate to be the one responsible for the destruction of the entire universe.

  • SpicyLemonZest an hour ago

    > Maybe some of you think I’m an idiot: You don’t get it at all lol. But that’s kind of my point. There are insiders and outsiders. When I talk to researchers or engineers who are happy to drop AGI into the conversation as a given, it’s like they know something I don’t. But nobody’s ever been able to tell me what that something is.

    They have, including multiple times in this very article, but the author's not willing to listen. As he says later:

    > But set aside the technical objections—what if it doesn't continue to get better?—and you’re left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it’s not.

    Modern AI researchers have proven that this is not true. They routinely increase the intelligence of systems by training on different data, using different compute, or applying different network architectures. But the author is absolutely convinced that this can't be so, so when researchers straightforwardly explain that they have done this, he's stuck trying to puzzle out what they could possibly mean. He references "Situational Awareness", an essay that includes detailed analyses of how researchers do this and why we should expect similar progress to continue, but he interprets it as a claim that "you don’t need cold, hard facts" because he presumes that the facts it presents can't possibly be true. (The kind of quantitative claim involved is sketched below.)
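
    For context, the quantitative form this claim usually takes is an empirical scaling law. Here is a minimal sketch, using the fitted constants reported in the Chinchilla paper (Hoffmann et al., 2022) as I recall them, so treat the exact numbers as illustrative:

      def chinchilla_loss(n_params, n_tokens,
                          E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
          # Predicted training loss L(N, D) = E + A/N^alpha + B/D^beta for a
          # model with n_params parameters trained on n_tokens tokens.
          return E + A / n_params**alpha + B / n_tokens**beta

      # More parameters or more data -> smoothly lower predicted loss:
      print(chinchilla_loss(70e9, 1.4e12))  # roughly Chinchilla-scale
      print(chinchilla_loss(7e9, 1.4e12))   # 10x smaller model, higher loss

    Whether lower loss counts as "more intelligence" is of course exactly what's in dispute, but the researchers' claim is at least that concrete.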

  • dang 16 minutes ago

    [stub for offtopicness]

    • oldestofsports 21 minutes ago

      To those who say “just use an adblocker”: if your local cafe had a group of waiters beating puppies right inside the entrance, would you just wear earplugs and close your eyes? Oh, how low humanity has sunk when we accept such garbage.

  • jahewson 2 hours ago

    I very much dislike the way this article blurs religious and doomsday thinking with conspiracy-theory thinking. There’s nobody conspiring on the other side of AGI. Other than that, it makes many good observations.