Example of a Conversation
If you dare to argue for any new hypothesis about the deep issues of philosophy, you may notice that you run into a particular attitude which impedes understanding. Rather than attacking the ideas proposed in the hypothesis, the criticism attempts to disqualify the very act of suggesting a hypothesis. Typically the debate includes the following:
- Scientific attitudes, often outdated ones, are presumed to be the obvious and final authority on nature.
- The demand for scientific rigor is used to rule out new hypotheses before they are even considered.
- The existence of empirically valid physics is used to deny the possibility, or the utility, of investigating any underlying metaphysics.
The respect for scientific authority and technology is wielded like a blunt instrument, so that the response to someone asking “Hey, is this house maybe built on top of an ancient burial ground?” is something like “Whaat? You don’t like living in a house, huh? You want to go back to squatting in the dirt?”
Even when it is pointed out that this kind of rebuttal is fallacious, there is not much that can be done to convince someone who would rather be right than understand a new idea.
This is a sample of how conversations tend to go when one tries to describe the limits of computation and the symbol grounding problem.
>> Craig Weinberg wrote:
>> > Here then is a simpler and more familiar example of how computation can differ
>> > from natural understanding, one which is not susceptible to any mereological
>> > systems argument.
>> >
>> > If any of you use passwords which are based on a pattern of
>> > keystrokes rather than the letters on the keys, you know that you can enter your
>> > password every day without ever knowing what it is you are typing
>> > (something with a #r5f^ in it…?).
>> >
>> > I think this is a good analogy for machine intelligence. By storing and
>> > copying procedures, a pseudo-semantic analysis can be performed, but it
>> > is an instrumental logic that has no way to access the letters of the
>> > ‘human keyboard’. The universal machine’s keyboard is blank and consists only
>> > of theoretical x,y coordinates where keys would be. No matter how good or
>> > sophisticated the machine is, it will still have no way to understand
>> > what the particular keystrokes “mean” to a person, only how they fit in with
>> > whatever set of fixed possibilities has been defined.
>> >
>> > Taking the analogy further, the human keyboard only applies to public
>> > communication. Privately, we have no keys to strike, and entire
>> > paragraphs or books can be represented by a single thought. Unlike computers, we do
>> > not have to build our ideas up from syntactic digits. Instead the
>> > public-facing computation follows from the experienced sense of what is to be
>> > communicated in general, from the top down, and the inside out.
SP: >> I think you have a problem with the idea that a system could display
>> properties that are not obvious from examining its parts. There’s no
>> way to argue around this, you just believe it and that’s that.
CW: > I don’t have a problem with the idea that a “system” could DISPLAY
> properties that are not obvious from EXAMINING its “parts”, but you overlook
> that DISPLAYING and EXAMINING are functions of consciousness only. If they
> were not, then consciousness would be superfluous. If my brain could examine
> the display of the body’s environment, then it would, and the presence or
> absence of perceptual experience would not make any difference.
>
> Systems and parts are defined by level of description – scales and scopes of
> perception and abstracted potential perception. They aren’t primitively
> real. A machine is not a machine in its own eyes, but our body is an
> expression of a single event which spans a human lifetime. A person is
> another expression of that event. The “system” of a person does not emerge
> from the activity of the body parts, as the entire coherence of the body is
> as a character within relativistically scoped perceptual experiences.
>
> I don’t think that I believe, I think that I understand. I think that you do
> not understand what I mean, but are projecting that onto me, and therefore
> have assigned a straw man to take my place. It is your straw man projection
> who must believe.
SP: Tell me what you believe so we can be clear:
My understanding is that you believe that if the parts of the Chinese
Room don’t understand Chinese, then the Chinese Room can’t understand
Chinese. Have I got this wrong?
CW: The fact that the Chinese Room can’t understand Chinese is not related to its parts, but to the category error in the root assumption that forms and functions can understand things. I see forms and functions as effects of experience, not as causes of it.
I like my examples better than the Chinese Room, because they are simpler:
1. I can type a password based on the keystrokes instead of the letters on the keys. This way no part of the “system” needs to know the letters; indeed, they could be removed altogether, thereby showing that data processing does not require all of the qualia that can be associated with it, and therefore it follows that data processing does not necessarily produce any or all qualia. (A minimal code sketch of this appears after example 4 below.)
2. The functional aspects of playing cards are unrelated to the suits, their colors, the pictures on the royal cards, and the participation of the players. No digital simulation of a card game requires any aesthetic qualities. (This too is sketched in code after example 4.)
3. The difference between a game like chess and a sport like basketball is that in chess, the game has only to do with the difficulty for the human intellect of computing all of the possibilities and prioritizing them logically. Sports have strategy as well, but they differ fundamentally in that the real challenge of the game is the physical execution of the moves. A machine has no feeling, so it can never participate meaningfully in a sport. It doesn’t get tired or feel pain, it need not attempt to accomplish something that it cannot accomplish, etc. If chess were a sport, completing each move would be subject to the possibility of failure and surprise, and the end could never be guaranteed to be checkmate, since there would always be the chance of weaker pieces getting lucky and overpowering the strong. There is no Cinderella story in real chess; the winning strategy always wins, because there can be no difference between theory and reality in an information-theoretic universe.
So no, I do not “believe” this, I understand it. I do not think that the Chinese Room is valid because wholes must be identical to their parts. The Chinese Room is valid because it can (if you let it) illustrate that the difference between understanding and processing is a difference in kind rather than a difference in degree. Technically, it is a difference in kind going one way (from the quantitative to the qualitative) and a difference in degree going the other way. You can reduce a sport to a game (as in computer basketball), but you can’t turn a video game into a sport unless you bring in hardware that is physical/aesthetic rather than programmatic. Which leads me to:
4. You can’t recharge your cell phone with a program. Again, we can make a cartoon or a program in which avatars are recharged with information that we imagine is like ‘energy’ or ‘health’, or we could just program them to have infinite energy, but that can’t be translated out of the context of representation. Like an Escher painting with realistic-looking staircases that are empirically impossible, the representational context is much more forgiving than the presentational context. Properties like gravity or thermodynamic irreversibility are not necessary in a picture or program. Neither are smells, pain, feeling, effort, ownership, etc. We can use programs to extend experience, but we can never replace experience itself with a program.
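To make example 1 concrete, here is a minimal Python sketch; the grid layout and the stored pattern are hypothetical, invented purely for illustration. The checker stores and compares nothing but (row, column) keystroke positions, so the key caps could be blank, as on the “universal machine’s keyboard”, and the program would behave identically.

```python
# Minimal sketch of example 1: verifying a password as a pattern of
# keystroke positions, never as letters. The stored pattern below is
# hypothetical.

# The stored "password" is a sequence of (row, col) key positions.
STORED_PATTERN = [(1, 3), (2, 7), (0, 4), (1, 3), (3, 1)]

def check_keystrokes(typed):
    """Return True if the typed (row, col) sequence matches the stored one.

    Nothing here references what the keys say: the processing completes
    without any 'letter' ever entering the system.
    """
    return typed == STORED_PATTERN

# A user "types" their password as positions on the (blank) keyboard.
attempt = [(1, 3), (2, 7), (0, 4), (1, 3), (3, 1)]
print(check_keystrokes(attempt))  # True, though no part of the system knows the letters
```

The only point of the sketch is that verification succeeds end to end while the letters remain entirely outside the program’s ontology.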
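Example 2 can be sketched the same way; the choice of War as the game is a hypothetical stand-in for any card game. Cards are bare integers, rank is derived by arithmetic, and nothing in the simulation encodes suit colors, court-card pictures, or the players’ participation.

```python
# Minimal sketch of example 2: one trick of the card game War, with cards
# as bare integers 0..51. Rank is an arithmetic residue; no colors,
# pictures, or felt participation appear anywhere in the simulation.
import random

deck = list(range(52))            # a card's identity is just a number
random.shuffle(deck)

def rank(card):
    return card % 13              # 0 = lowest rank ... 12 = highest

a, b = deck.pop(), deck.pop()     # "each player" draws a card
if rank(a) > rank(b):
    print("first card wins the trick")
elif rank(b) > rank(a):
    print("second card wins the trick")
else:
    print("war: ranks are equal")
```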