6 comments

  • gweinberg 6 hours ago

    I don't understand why people have any respect at all for Searle's "argument"; it's just a bare assertion that "machines can't think", combined with some cheap misdirection. Can anyone argue that having Chinese characters instead of bits going in and out is anything other than misdirection? Can anyone argue that having a human being acting like a CPU instead of an actual CPU is anything other than cheap misdirection?

    • speak_plainly 3 hours ago

      I think you might be missing out on what the Chinese Room thought experiment is about.

      The argument isn’t about whether machines can think, but about whether computation alone can generate understanding.

      It shows that syntax (in this case, the formal manipulation of symbols) is insufficient for semantics, i.e. genuine meaning. Whether you're a machine or a human being, I can teach you every grammatical and syntactic rule of a language, but that is not enough for you to understand what is being said or for meaning to arise, just as in his thought experiment. From the outside it looks like you understand, but the agent in the room has no clue what meaning is being imparted. You cannot derive semantics from syntax.
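
      To make the point concrete, here is a toy sketch of a purely syntactic responder (my own illustration, not Searle's; the phrases and replies are made up). Every step is shape-matching against a rulebook, and nothing in the program corresponds to what the symbols mean, yet from the outside the answers can look competent:

          # Toy "rulebook": formal mappings from input symbols to output symbols.
          RULEBOOK = {
              "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
              "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's nice today."
          }

          def room(symbols):
              # The "person in the room" only matches shapes against the rulebook;
              # no step involves knowing what any symbol means.
              return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

          print(room("你好吗？"))  # looks like understanding from the outside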

      Searle is highlighting a limitation of computationalism and the idea of 'Strong AI': no matter how sophisticated you make your machine, it will never be able to achieve genuine understanding, intentionality, or consciousness, because it operates purely through syntactic processes.

      This has implications beyond the thought experiment; the idea has influenced Philosophy of Language, Linguistics, AI and ML, Epistemology, and Cognitive Science. To boil it down, one major implication is that we lack a rock-solid theory of how semantics arises, whether in machines or in humans.

      • RaftPeople 11 minutes ago

        Slight tangent, but you seem well informed so I'll ask you (I skimmed the Stanford site and didn't see an obvious answer):

        Is the assumption that there is internal state, and that the rulebook is flexible enough to produce the correct output even for things that require learning and internal state?

        For example: the input describes the rules of a game, then initiates the game with some moves, and the Chinese room is expected to produce the correct output?

        It seems that without learning+state the system would fail to produce the correct output, so it couldn't possibly be said to understand.

        With learning and state, at least it can get the right answer, but that still leaves the question of whether that represents understanding or not.
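
        Here is roughly what I mean by learning+state, as a toy sketch (the conventions are made up, just to pin down the question): the rulebook can include a rule like "when the input declares a new mapping, write it on the scratch paper", and later inputs get answered from that scratch paper, all by pure symbol manipulation.

            # Hypothetical sketch: a rulebook plus scratch paper (internal state).
            class StatefulRoom:
                def __init__(self):
                    self.scratch_paper = {}  # mappings "learned" from the input stream

                def step(self, symbols):
                    # Made-up convention: "RULE:<in>-><out>" adds a mapping.
                    if symbols.startswith("RULE:") and "->" in symbols:
                        pattern, reply = symbols[len("RULE:"):].split("->", 1)
                        self.scratch_paper[pattern] = reply
                        return "OK"
                    return self.scratch_paper.get(symbols, "?")

            room = StatefulRoom()
            room.step("RULE:将军->应将")   # "teach" a response (symbols chosen arbitrarily)
            print(room.step("将军"))       # correct output, still nothing but symbol shuffling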

      • gweinberg 2 hours ago

        I understand the assertion perfectly. I understand why people might feel it intuitively makes sense. I don't understand why anyone purports to believe that saying "Chinese characters" rather than bit sequences serves any purpose other than to confuse.

    • kbelder 3 hours ago

      I agree. At its heart, it just relies on mysticism. There's a hidden assertion that humans are supernatural.

  • SigmundA 6 hours ago

    First time I heard of this was in Blindsight, and every time I use an LLM it just makes me think of the crew talking to Rorschach.

    Intelligence without consciousness...

    https://www.rifters.com/real/Blindsight.htm