Henry Markram: Supercomputing the brain’s secrets
I’m very much in favor of this kind of research, but I find it telling that the speaker mentions at the beginning one of the reasons why simulating brains is important. He says that we can’t keep testing on animals forever. A noble sentiment, but only if we presume that the computational simulation is in fact ‘less than’ an animal. Here the truth about computationalism is revealed: it doesn’t take itself seriously. The pretense that life and feeling can be emulated mechanically is only possible when we think of consciousness in terms of a toy model. This model sees consciousness as ‘what a brain does’ from the beginning, without any comprehension of the chasm between that set of neurological activities and the invisible world which is experienced through those activities.
If the brain simulation were capable of functioning as a living brain, any change in the program or manipulation of the simulation could result in loops of unimaginable suffering for the simulation. Studying the effects of torture on such a simulation would logically be no better than torturing an animal — even worse, since the poor digital creature could never die or escape the captivity of its torturers.
Obviously this is not a serious consideration for AI. Nobody actually believes that what they are assembling out of coded symbols is literally alive or aware, despite how convincingly they claim to have proved it to themselves. It is a model.
When we seek to reverse engineer a conscious experience from the material mechanisms of neurology, we can rightfully expect to learn many important things about consciousness, but as the blurry images in the video might foreshadow, we can’t learn who it is that is conscious, how their world feels, etc.
It is hopeful to me that they are realizing that it is the electromagnetic patterns themselves which ‘contain’ consciousness, but they still completely miss the deeper implications of this. As it stands now, the extraction of semi-coherent images from modeled electromagnetism represents hope for homing in on the formulas to translate EM coordinates into visual qualia (and other qualia by extension), but this hope fails to recognize the infinite regress of the homunculus fallacy. We see the rose, but the simulation does not. The simulation can be paused, copied and pasted, looped, edited, etc., but feeling doesn’t work that way. We have not modeled the sensorimotive experience at all. Instead we have created a dynamic CGI onto which we can project our own interpretations about perception. Without those interpretations from our firsthand subjective perceptions, there is still no sign of any experience within the model. It’s still only pixels and memory registers switching on and off. I suspect that the further this project progresses, the more we will have to resist the increasingly obvious failure of the model to behave meaningfully like a conscious living organism does. I think it will be fantastic, however, for neurology and for extending and improving our lives.