Questioning the Sufficiency of Information
Searle’s “Chinese Room” thought experiment tends to be despised by strong AI enthusiasts, who often seem to take issue with Searle personally because of it. They accuse both the allegory and its author of being stupid, and the Systems Reply is the objection offered most often: the man in the room may not understand Chinese, but surely the whole system, including the book of translation, must be considered to understand Chinese.
Here, then, is a simpler and more familiar example of how computation can differ from natural understanding, one which is not susceptible to any mereological Systems argument.
If any of you use passwords which are based on a pattern of keystrokes rather than the letters on the keys, you know that you can enter your password every day without ever knowing what it is you are typing (something with a #r5f^ in it…?).
I think this is a good analogy for machine intelligence. By storing and copying procedures, a machine can perform a pseudo-semantic analysis, but it is an instrumental logic that has no way to access the letters of the ‘human keyboard’. The universal machine’s keyboard is blank, consisting only of theoretical x,y coordinates where keys would be. No matter how good or sophisticated the machine is, it still has no way to understand what the particular keystrokes “mean” to a person, only how they fit in with whatever set of fixed possibilities has been defined.
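To make the keystroke point concrete, here is a minimal sketch, purely illustrative: the layout, the stored pattern, and the function names are all invented for the example. The check compares only the positions of keystrokes, never the characters that any particular keyboard happens to print on those keys.

```python
# Illustrative sketch only: a "password" check that stores key positions,
# never the characters they produce. The positions below are made up.

STORED_PATTERN = [(4, 1), (5, 2), (4, 3), (6, 1)]  # (row, column) of each keystroke

def matches(keystroke_positions):
    """Return True if the typed positions match the stored pattern.

    Nothing here refers to letters; swap every keycap on the keyboard
    and the check still passes, because only the geometry of the
    strokes is compared.
    """
    return keystroke_positions == STORED_PATTERN

typed = [(4, 1), (5, 2), (4, 3), (6, 1)]
print(matches(typed))  # True, even though we never asked what letters were typed
```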
Taking the analogy further, the human keyboard applies only to public communication. Privately, we have no keys to strike, and entire paragraphs or books can be represented by a single thought. Unlike computers, we do not have to build our ideas up from syntactic digits. Instead, the public-facing computation follows from the experienced sense of what is to be communicated in general, from the top down and the inside out.
How large does a digital circle have to be before the circumference seems like a straight line?
Digital information has no scale or sense of relation. Code is code. Any rendering of that code into a visual experience of lines and curves is a question of graphic formatting and human optical interaction. In a universe that takes information as fundamental, the proximity-dependent flatness or roundness of the Earth would have to be defined programmatically. Otherwise, it is simply “the case” that a person is standing on the round surface of the Earth. Proximity is simply a value with no inherent geometric relevance.
When we resize a circle in Photoshop, for instance, the program is not transforming a real shape, it is erasing the old digital circle and creating a new, unrelated digital circle. Like a cartoon, the relation between the before and after, between one frame and the “next” is within our own interpretation, not within the information.
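As a rough illustration of that point, the sketch below rasterizes a “circle” at two different radii; the function and its parameters are invented for the example. Each call produces a fresh, independent grid of values, and nothing in the data itself records that one grid is the other “resized”.

```python
# Illustrative sketch: rasterize a "circle" at two radii.
# Each call builds a brand-new grid; nothing in the data connects
# the second circle to the first as "the same shape, enlarged".

def rasterize_circle(radius, size):
    """Return a size x size grid with 1s roughly along the circle's edge."""
    cx = cy = size / 2
    grid = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if abs(d - radius) < 0.5:
                grid[y][x] = 1
    return grid

small = rasterize_circle(radius=5, size=16)
large = rasterize_circle(radius=30, size=64)
# 'small' and 'large' are unrelated arrays of 0s and 1s; any sense that one
# is a scaled-up version of the other exists in the viewer, not in the data.
```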
Very insightful!!! What, no ghost in the machine? 🙂
This is a very useful way to help people understand why even with the most amazing programming, AI is still a program. Can we be fooled into anthropomorphizing the results? Why on earth would we want to?
Thank you!
Sure, thank you! I’ve always felt unsettled about the Chinese Room, that there must be a simpler, more familiar example that says the same thing. I like this one because people who work with computers should be able to make sense of it…if they actually want to.