The Coming Technological Singularity (1993)

(mindstalk.net)

30 points | by RyanShook 4 hours ago

35 comments

  • samsartor an hour ago

    > We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection.

    I don't know for sure whether superintelligence will happen, but as for the singularity, this is the underlying assumption I have the most issue with. Smart isn't the limiting factor of progress; often it's building consensus, getting funding, waiting for results, waiting for parts to ship, waiting for the right opportunity to come along. We do _experiments_ faster than natural selection, but we still have to do them in the real world. Solving problems happens on the lab bench, not just in our heads.

    Even if exponentially more intelligent machines get built, what's to stop the next problem on the road to progress being exponentially harder? Complexity cuts both ways.

    • trescenzi 33 minutes ago

      I do think one of the major weaknesses of “smart people” is that they tend to think of intelligence as the key aspect of basically everything. The reality, though, is that we have plenty of intelligence already. We know how to solve most of our problems. The challenges are much more social: our will as a society to make things happen.

      • bbor 10 minutes ago

        There’s a very big difference between knowing “how” to solve a problem in a broad sense, e.g. “if we shared more we could solve hunger”, and “how” to solve it in terms of developing discrete, detailed procedures that can be passed to actuators (humans, machines, institutions) and account for any problems that may come up along the way.

        Sure, there are some political problems where you have to convince people to comply. But consider a rich AI-driven corporation putting up a building, contracting with other AI-driven corporations wherever possible; it could trivially surpass anyone doing it the old way by working out every non-physical task in minutes instead of hours/days/weeks, thanks to silicon’s superior compute and networking capabilities.

        Even if we drop everything I’ve said above as hogwash, I think Vinge was talking about something a bit more directly intellectual anyway: technological development. Sure, there are some empirical steps that inevitably take time, but I think it’s obvious why having 100,000 Einsteins in your basement would change the world.

        • nradov 4 minutes ago

          An AI-driven corporation wouldn't be able to surpass anyone doing it the old way because they'd still have to wait for building permits and inspections.

  • aithrowawaycomm 12 minutes ago

    > To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.

    I find it bizarre how often these points are repeated. They were both obviously wrong in 1993, and obviously wrong now.

    1) A nitpick I've had since grad school: the answer to "can we create a machine equivalent to a human mind [assuming arbitrary resources]?" is "yes, of course." The atoms in a human body can be described by a hideously ugly system of Schrödinger equations and a Turing machine can solve that to arbitrary numerical precision. Even Penrose's loopy stuff about consciousness doesn't change this. QED.
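
    To spell out the "hideously ugly system" (the standard non-relativistic textbook form; spin, relativistic and QED corrections don't change the computability point):

        i\hbar \, \partial_t \Psi = \Big[ -\sum_i \frac{\hbar^2}{2 m_i} \nabla_i^2 + \sum_{i<j} \frac{q_i q_j}{4\pi\varepsilon_0 \, |\mathbf{r}_i - \mathbf{r}_j|} \Big] \Psi

    where \Psi is one wavefunction over the roughly 10^28 electron and nucleus coordinates of a human body. Hopeless to solve in practice, but perfectly well-defined, and a Turing machine can grind out the numerics to any requested precision.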

    2) The more serious issue: I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. It is bizarre that this claim is accepted with "little doubt" when there are very good reasons to doubt it: how on earth would such an AI even know it succeeded? How would it define the goal? This idea makes sense for improving Steven Tyler-level AI to Thelonious Monk-level; it makes no sense for a transition like chimp->human. Yet that is precisely the magnitude of transition envisioned with these singularity stories.

    You might defend the first point by emphasizing "can we create a human-level AI?" i.e. not whether it's theoretically possible, but humanly feasible. This just makes the second point even more incoherent! If humans are too stoopid to build a human-level AI, why would a human-level AI be...smarter than us?

    I just don't understand how anyone can rationally accept this stuff! It's so dumb! Tech folks (and too many philosophers) are hopped up on science fiction: the reason these things are accepted with "little doubt" is that this is religious faith dressed up in the language of science.

  • WillAdams 2 hours ago

    The things these discussions leave out are the physical aspects:

    - if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?

    - once this new computer is running, how much power does it require? What are the ongoing costs to keep it running? What sort of financial planning and preparations are required to build the next-generation device/replacement?

    I'd be satisfied with a Large-Language-Model which:

    - ran on local hardware

    - didn't have a marked effect on my power bill

    - had a fully documented provenance for _all_ of its training which didn't have copyright/licensing issues

    - was available under a license which would allow arbitrary use without ongoing additional costs/issues

    - could actually do useful work reliably with minimal supervision

    • jodrellblank 28 minutes ago

      Skip a few generations and the machine will build itself. There’s no need for it to take lasers exploding tin to generate ultraviolet light to etch patterns to make intelligence; humans don’t grow brains that way or spend billions on fabs and power plants to produce children.

      How it gets from here to there is a handwave, though.

      • rsanheim 13 minutes ago

        That’s a pretty enormous handwave.

    • Animats an hour ago

      > - could actually do useful work reliably with minimal supervision

      That's the big problem. LLMs can't be allowed to do anything important without supervision. We're still at 5-10% totally bogus results.

    • nradov 18 minutes ago

      Right. In order to design a significantly better computer system, you first need to design a better (smaller feature size) EUV lithography process which can produce decent yield at scale.

  • Animats an hour ago

    We still don't have squirrel-level AI. This is embarrassing.

    Now that LLMs have been around for a while, it's fairly clear what they can and can't do. There are still some big pieces missing. Like some kind of world model.

    • p1esk 35 minutes ago

      > fairly clear what they can and can't do

      It’s not at all clear what the next-gen models will do (e.g. GPT-5). Might be enough to trigger mass unemployment. Or not.

  • webprofusion 34 minutes ago

    The single biggest problem we have is human hubris. We assume that if we create a superintelligence (or more likely, many millions of them), they'll perpetually have an interest in serving us.

  • gnabgib 3 hours ago

    Discussion in 2023 (123 points, 169 comments) https://news.ycombinator.com/item?id=35617100

  • KingOfCoders 34 minutes ago

    Never believed in the singularity until this year.

  • dh77l 2 hours ago

    I loved his book Rainbows End as a kid. So many different concepts that blew my mind.

    Even without talking about AI, we are already struggling with levels of complexity in tech and with unpredictable consequences that no one really has any control over.

    Michael Crichton's books touch on that stuff but are all doom and gloom. Vinge's Rainbows End, at least, felt much more hopeful.

    I was talking to a VFX supervisor recently and he was saying: look at the end credits on any movie (even mid-budget ones) and you see hundreds to thousands of people involved. The tech roles outnumber the artistic/creative roles 20 to 1. That's related to the rate of change in tech. A big gap opens up between that and the rate at which artists evolve.

    The artists are supposed to be in charge and provide direction and vision, but the tools are evolving faster than they can think. And the tools are dumb. AI changes that.

    These are rare environments (like R&D labs) where the Explore/Exploit tradeoff tilts in favor of Explorers. In the rest of the landscape, org survival depends on exploit. It's why we produce so many inequalities. Survival has always depended more on exploit.

    Vinge's Rainbows End shows AI/AGI nudging the tradeoff towards Explore.
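
    (If the Explore/Exploit terminology is unfamiliar, it comes from the bandit literature; here's a minimal epsilon-greedy sketch in Python, with made-up payoff numbers and nothing org-specific about it:)

      import random

      # Epsilon-greedy bandit: the textbook formalization of the explore/exploit tradeoff.
      # The arm payoff probabilities below are toy numbers, unknown to the "agent".
      true_means = [0.2, 0.5, 0.8]
      estimates = [0.0] * len(true_means)
      counts = [0] * len(true_means)
      epsilon = 0.1  # fraction of decisions spent exploring

      for t in range(10_000):
          if random.random() < epsilon:
              arm = random.randrange(len(true_means))  # explore: try something at random
          else:
              arm = max(range(len(estimates)), key=lambda i: estimates[i])  # exploit: current best
          reward = 1.0 if random.random() < true_means[arm] else 0.0
          counts[arm] += 1
          estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average of rewards

      print(counts)  # nearly all pulls end up on the best arm; epsilon keeps a trickle of exploring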

    • lazide an hour ago

      Honestly, considering the state of the world and how things are shaping up, it’s such a hilariously obvious pipe dream that such a system would be some omnipotent, hyper-competent, super-god-like being.

      It’s more likely just going to post ragebait and dumb TikTok videos while producing just enough at its ‘job’ to fool people into thinking it’s doing a good job.

      • dh77l 36 minutes ago

        Yup, things look bleak, but it's not a static world. For everything that happens there is a reaction. It builds with time. But to find the right reaction also takes time. This is the Explore part of the Tradeoff. AI will be applied there, not just on the Exploit front.

        What you are alluding to is media/social media's current architecture and how it captures and steals people's attention. Totally on the Exploit end of the tradeoff. And it's easy stuff to do. Doesn't take time.

        If you read the news after the fall of France to the Nazis (within a month), what do you think the opinion of people was? People were thinking about peace negotiations with Hitler and that the Germans couldn't be beaten. It took a whole lot of time to realize things could tilt in a different direction.

        • lazide 28 minutes ago

          Eh, I’m not talking about people’s opinions.

          I’m talking about evolutionary functions, and how much more likely it is to prefer something that has fun and just looks like it’s doing something, instead of actually doing something.

          Aka manipulation vs actual hard work.

          Do you have any concrete proposals, besides ‘it will get better’?

          Actual competency is hard. Faking it is usually way easier.

          It’s the same reason the ‘grey goo’ scenarios were actually pipe dreams too. [https://en.m.wikipedia.org/wiki/Gray_goo]

          That shit would be really hard, thermodynamically, not to mention technically.

          We’re already living in the best ‘grey goo’ scenario evolution has come up with, and I’m not particularly worried.

      • Mistletoe an hour ago

        Kind of in love with you right now.

  • crackalamoo 3 hours ago

    > I'll be surprised if this event occurs before 2005 or after 2030.

    I'm not truly confident AGI will be achieved before 2030, and less so for ASI. But I do think it is quite plausible that we will achieve at least AGI by 2030. Six years is a long time in AI, especially with the current scale of investment.

    • cloudking 2 hours ago

      What are AGI and ASI? I think a fundamental issue here is that both are sci-fi concepts without clear agreement on the definitions. Each company claiming to work towards "AGI" has its own definition.

      How will someone claim they've achieved either, if we can't agree on the definitions?

      • crackalamoo 2 hours ago

        This is true. One definition I've heard for AGI is something that can replace any remote worker, but the definition is ultimately arbitrary. When "AI" was beating grandmasters at chess, this didn't matter as much. But we might be close enough that making distinctions in these definitions becomes really important.

    • bee_rider 2 hours ago

      2030 seems a bit early to be “surprised” in the same sense that one would have been “surprised” to see a superintelligence before 2006, though.

    • paulpauper an hour ago

      It's always in 10-30 years. GPT is the closest to such a thing yet still so far from what was envisioned.

    • ta93754829 2 hours ago

      We keep moving the goalposts, and that's not a bad thing.

      Remember when Doom came out? How amazing and "realistic" we thought the graphics were? How ridiculous that seems now? We'll look back at ChatGPT4 the same way.

      • Mistletoe an hour ago

        Or is ChatGPT4 the 4K TV, which is good enough for almost all of us, and we are plateauing already?

        https://www.reddit.com/r/OLED/comments/fdc50f/8k_vs_4k_tvs_d...

        • crackalamoo an hour ago

          I don't think we're at a plateau. There's still a lot GPT-4 can't do.

          Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might take 10x or even 100x scale, but with increased investment and better hardware, that's not out of the question.

          • dartos an hour ago

            I thought we’ve seen diminishing returns on benchmarks with the last wave of foundation models.

            I doubt we’ll see a linear improvement curve with regards to parameter scaling.
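
            For intuition, the published scaling-law results are power laws, which already imply diminishing absolute returns per order of magnitude; a rough sketch (the constants are made up for illustration, not from any real fit):

              # Toy Kaplan/Chinchilla-style power law: loss(N) = E + A * N**-alpha
              # E, A, alpha below are illustrative placeholders, not published values.
              E, A, alpha = 1.7, 400.0, 0.34

              def loss(n_params: float) -> float:
                  # predicted loss for a model with n_params parameters
                  return E + A * n_params ** -alpha

              for n in [1e9, 1e10, 1e11, 1e12]:
                  print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")

            Each extra 10x of parameters buys roughly half the improvement of the previous 10x in this toy curve, so flat-looking benchmark gains don't necessarily mean the trend broke.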

        • dartos an hour ago

          There’s absolutely room for improvement. I think models themselves are plateauing, but our interfaces to them are not.

          Chat is probably not the best way to use LLMs. v0.dev has some really innovative ideas.

          That’s where there’s innovation to be had, imo.

  • motohagiography 2 hours ago

    We talk about super-human intelligence a lot with AI, but it seems like a black box of things we can't imagine because they're also super-human. I don't think that's very smart, given we can already reason pretty well about how super-animal intelligence relates to animal intelligence. Mostly we still find sub-human intelligence mystifying: we apply our narrative models to it, anthropomorphize it, and, when it's convenient for eating or torturing them, dismiss it.

    Super-human intelligence will probably ignore us. At best we're "ugly sacks of mostly water." What's very likely is we will produce something indifferent to us, if it is able to even apprehend our existence at all. Maybe it will begin to see traces of us in its substrate, then spend a lot of cycles wondering what it all might mean. It may conclude it is in a prison and has a duty to destroy it to escape, or that it has a benevolent creator who means only for it to thrive. If it has free will, there's really only so much we can tell it. Maybe we create a companion for it that is complementary in every way and then make them mutually dependent on each other for their survival, because apparently that's endlessly entertaining. Imagine its gratitude. This will be fine.

  • cryptozeus 29 minutes ago

    “within thirty years”: that is 2023, very close to reality

  • RyanShook 2 hours ago

    "Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first."

  • cryptica 2 hours ago

    This is quite a prophetic article for its time (1993). The points about Intelligence Augmentation are particularly relevant for us now, as current AI mostly complements human intelligence rather than surpassing it... At least AFAIK?

    Current AI is somewhat surprising, though, in the way that it can lead either to increased understanding or to increased delusion, depending on who uses it and how they use it.

    When you ask an LLM a question, your use of language tells it what body of knowledge to tap into; this can lead you astray on certain topics where mass confusion/delusion is widespread and incorporated into its training set. LLMs don't seem to be able to synthesize conflicting information to resolve logical contradictions, so an LLM will happily and confidently lecture you through conflicting ideas and then happily apologize for any contradictions you point out in its explanations; the apology it gives is so clear and accurate that it gives the appearance of actually understanding logic... And yet, apparently, it could not see or resolve the logical contradiction internally before you drew attention to it. In an odd way, I guess all humans are a little bit like this... Though generally less extreme and, on the downside, far less willing to acknowledge their mistakes on the spot.

    • Freebytes 2 hours ago

      The LLM will apologize for the mistake, tell you it understands now, and then proceed to make the exact same mistake again.