The Pinocchio Complex
(In response to the view of mechanical feedback being responsible for emotion.)
To my mind, this kind of ‘outside-in’ model of emotions has three fatal flaws. First, by the Behaviorist logic employed in the concept of ‘valences’, we should expect that if a machine produces enough electric wheeled robots, eventually they will learn to seek out sources to recharge themselves. Since most people would agree that this seems absurd, what I would call a ‘Deus ex Complexity’ counterargument is compelled to plug the hole – something like ‘there is a minimum level of sophistication required before machines learn to adapt themselves’. But no real mechanism is offered to get from statistical randomness to that initial threshold of teleological motivation, nor any reason why such a threshold should exist a priori, i.e. why this magic recipe exists in the first place that changes a repeating a-signifying flux into proprietary sequential signals.
Second, whether or not such a threshold exists, any subjective experience of emotion would be completely superfluous to it. If machines made of sticky ping pong balls eventually learn to avoid ping pong paddles, why should anything feel some way about that avoidance? If we are going to follow this logic of physical mechanism, what is this redundant ‘sentiment’ doing suddenly inhabiting the process?
Third, even if such a sentiment could somehow improve the odds of success for machines (which again seems obviously absurd to me), how could such a thing come to pass? Where does it physically exist? How is it bound to matter and energy? If it is some kind of metaphysical ‘emergent property’, then how does the outside-in model have any explanatory power at all? Why not just say God did it?
I think that the only way we can entertain information-based theories of consciousness is by taking consciousness as a given, and extrapolating a Just-So story to convince ourselves that some of the products of consciousness (information, behavior) could be its ultimate source. Running the scenario forward, however, yields no sign of a possible invention of experience in the exchange of automatic physical interactions. There really is no logical use for machines or arithmetic processes to benefit from experience over unconscious procedure calls. What seems ironic to me is that the cognitive biases so readily projected onto teleological arguments go unquestioned in this bit of wishful thinking. Why is the mechanical puppeteer never suspected of a Pinocchio complex?