Quanta are Flat Qualia
> We don’t attribute qualia to gadgets that are smarter than us at specific tasks such as playing chess.
> We also don’t expect sci-fi AIs such as Mr Data to have emotions or qualia… in fact we seem to have the intuition that they are qualialess *because* they are so one-sidedly logical.
> Why would qualia help with intelligence anyway? If our AIs showed aesthetic flair, empathy, artistic creativity etc., that would be another matter.
> How could you be a great painter without colour qualia? But that’s not exactly intelligence.
Right! Think of intelligence as a motivation to flatten (mechanize, formulate, automate, determine, quantify) qualia. If we suppose that the depth and richness of qualia is directly proportional to its degree of proprietary privacy, and inversely proportional to its degree of public generality, then intelligence is a feeling that wants to access objective truth without subjective feeling. The flatter the qualia, the more mileage you get out of it in terms of public application or universality. Qualia is not merely unjustifiable in functional terms; it is the polar opposite of deterministic function. This is why the relation between logic and feeling is stereotypically antagonistic.
The problem with AI is that we are trying to accomplish intelligence from the outside in. We want to start with the flattest possible qualia (binary computation) and build understanding from the bottom up. What we wind up with is a robot that excels at pretending to understand*. It’s a reverse-engineering project for something that inherently cannot be reversed.
*Based on our scripted theory (a computational, flat-qualia skeleton) of the functional consequences of understanding, rather than on the full experience of understanding.