Being Human: Mental Representations & Decision-Making
01:20 – 20:36 Laurie Santos, Comparative Psychologist, Yale
Decision-making bias in gains vs. losses.
A nice example that challenges assumptions of human exceptionalism and gives insights into the relativity of perceived risk.
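To make the gain/loss asymmetry concrete, here is a rough sketch of my own (not anything from the talk) using a Kahneman–Tversky-style value function; the curvature and loss-aversion parameters are illustrative assumptions, not data from any experiment discussed in the video:

```python
# Toy illustration of gain/loss framing bias (prospect-theory style).
# Parameter values (alpha, beta, lam) are common illustrative defaults,
# chosen here only to show the shape of the asymmetry.

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of outcome x relative to a reference point.

    Gains are concave (risk-averse); losses are convex and weighted more
    heavily (loss-averse), so the same objective amount 'feels' different
    depending on whether it is framed as a gain or a loss.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Same expected value, framed as a gain vs. as a loss:
sure_gain, risky_gain = prospect_value(50), 0.5 * prospect_value(100)
sure_loss, risky_loss = prospect_value(-50), 0.5 * prospect_value(-100)

print(sure_gain > risky_gain)   # True: prefer the sure thing for gains
print(sure_loss < risky_loss)   # True: prefer the gamble for losses
```

Under these assumed parameters the sure gain beats the gamble, while the sure loss is worse than the gamble, which is the kind of relativity of perceived risk the segment points to.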
20:37 – 39:39 Thomas Metzinger, Philosopher, Gutenberg University
The Self Model – Internal representation of the self as a whole (ownership).
I would argue against ‘representation’. It is a presentation which, although it can be fooled, is not necessarily a figment of computation. Instead, it’s about resolving conflicts between levels. We rely on vision to inform our experience more than on any other channel of sense, so more subtle awareness can be subsumed. If you close your eyes, you won’t be fooled. If you try to move your hand, your intention to move it won’t get lost. It is therefore premature to assume a literal self model as a mathematical entity.
Body image displacement/dissociation. Rubber hand, video displacement, remote prosthetic robotics.
Assumes an invisible interface which hides neurocomputation, rather than neurocomputation being a visible interface which hides awareness. I disagree that there is a medium. Naive realism is actually a limited realism of genuine experience, not an abstract model or program.
39:40 – 01:01 David Eagleman, Neuroscientist, Baylor University
Expressing the assumptions of neuroscience – of immense sub-personal complexity underlying personal hallucination, i.e. complex = “real”, condensed = “illusory”. I think this is an important phase to pass through in understanding, but one that ultimately needs to be overcome. When the personal layer examines the sub-personal (‘incognito’) layers through impersonal instruments, the result is a ‘gap’ between unconscious operations and (unexplained) representation. I maintain that this view is almost correct, but from a more objective perspective it is perfectly inverted.
Good stuff about how brain damage can change identity, even if part of you is unchanged. This speaks to the power of sub-personal and neurological conditions, but I think it is a mistake to presume that consciousness in general supervenes on neurology in general. By changing what we think about and what we do with our bodies, our neurology follows that intention rather than leading it. We can also take this assumption and follow it down the microcosm, from neurons to molecules, to atoms, to the quantum level, and wind up lacking a meaningful substrate with any more explanatory power than the top-level phenomenological experience. If anything, the subjective experience of perception and participation is far more insightful as to why the body is doing what it does than the probabilistic irrationality of the ultra-microcosm.
There is also a disconnect: the neuroscientific perspective completely embraces a bio-deterministic picture of consciousness in every individual, yet places blind faith in a rationalist, interventionist, free-will picture of social policy. Somehow we are slaves to our neurology in all matters except when it comes to redesigning our legal system. In these matters, society is suddenly modeled not as the inevitable computations of interacting brain-vehicles, but as an open marketplace of disembodied ideas which can be assessed without bias and evaluated independently of neurophysiology.
There is no explanation offered to bridge this gap. How can we be bound and blinded by naive realism, yet able to understand this blindness with crystal clarity? How can we believe that we have no real free will yet casually suggest that we should choose to use our free will to intentionally contribute to social progress?
I do agree that retribution is “a Stone Age concept”, but at the same time, why should we expect society as a whole to transcend the pre-Stone Age concepts of the individuals that make it up? We can only do that if we admit that under typical conditions, we do have some genuine participation in our own thoughts and actions. We can’t take all the credit or blame, but neither can we escape it completely.
The neurofeedback treatment for addiction that David Eagleman describes around 01:15 sounds great. My sense is that it hasn’t worked as well in practice as it seems it should in theory. I’m not knocking the approach; I think it’s a good start, but it is still rooted in mechanism, behaviorism, and ultimately the neo-phrenological assumptions of contemporary neuroscience. I don’t want to minimize the importance of this kind of research, but I think we are missing the big picture by insisting on the software model of consciousness.
In the last ten minutes: good stuff on pre-linguistic concepts of justice and fairness. Three-month-old infants innately choose good puppets over bad ones.
I agree that first-person accounts do not describe what is going on within the brain, but neither do analyses of what is going on within the brain anticipate anything at all about consciousness, including the fact of consciousness itself. We have to take our own word for our existence to begin with; only then can we figure out how it is that our experience doesn’t show up in the structures of the brain.
Approaching it that way, I find the solution is that it is actually matter in space which is the reduction and re-presentation of experience, not the other way around. Matter extends in a different way from direct subjective experience – the opposite way – so that when we look at matter we are seeing a representation of many, many experiences on many different scales and frequencies: some seem frozen in time to us, while others seem to be changing so fast that they are in superposition.