Joscha Bach, Yulia Sandamirskaya: “The Third Age of AI: Understanding Machines that Understand”
Here are my comments and Extra Annoying Questions™ on this recent discussion. I like, admire, and respect both of them, and am not claiming to have competence in the specific domains of AI development they’re speaking on, only in the metaphysical/philosophical domains that underlie them. I don’t even disagree with the merits of each of their views on how best to proceed with AI dev in the near future. What fun would it be to write about what I don’t disagree with, though? My disagreements are with the big, big, big picture issues of the relationship of consciousness, information processing, and cosmology.
Jumping right in near the beginning…
“The intensity gets associated with brightness and the flatness gets associated with the absence of brightness, with darkness”
Joscha 12:37
First of all, the (neuronal) intensity and flatness *already are functionally just as good as* brightness and darkness. There is no advantage to conjuring non-physical, non-parsimonious, unexplained qualities of visibility to accomplish the exact same thing as was already being accomplished by invisible neuronal properties of ‘intensity’ and ‘flatness’.
Secondly, where are the initial properties of intensity and flatness coming from? Why take those for granted but not sight? In what scope of perception and aesthetic modality is this particular time span presented as a separate event from the totality of events in the universe? What is qualifying these events of subatomic and atomic positional change, or grouping their separate instances of change together as “intense” or “flat”? Remember, this is invisible, intangible, and unconscious. It is unexperienced. A theoretical neuron prior to any perceptual conditioning that would make it familiar to us as anything resembling a neuron, or an object, or an image.
Third, what is qualifying the qualification of contrast, and why? In a hypothetical ideal neuron before all conscious experience and perception, the mechanisms are already doing what physical forces mechanically and inevitably demand. If there is a switch or gate shaped structure in a cell membrane that opens when ions pile up, that is what is going to happen regardless of whether there is any qualification of the piling of ions as ‘contrasting’ against any subsequent absence of piles of ions. Nothing is watching to see what happens if we don’t assume consciousness. So now we have exposed as unparsimonious and epiphenomenal to physics not only visibility (brightness and darkness) and observed qualities of neuronal activity (intensity and flatness), but also the purely qualitative evaluation of ‘contrast’. Without consciousness, there isn’t anything to cause a coherent contrast that defines the beginning and ending of an event.
- 13:42 I do like Joscha’s read of the story of Genesis as a myth describing consciousness emerging from a neurological substrate, however I question why the animals he mentions are constructed ‘in the mind’ rather than discovered. Also, why so much focus on sight? What about the other senses? We can feel the heat of the sun – why not make animals out of arrays of warm and cool pixels instead of bright and dark? Why have multiple modes of aesthetic presentation at all? Again – where is the parsimony that we need for a true solution to the hard problem / explanatory gap? If we already have molecules doing what molecules must do in a neuron, which is just move or resist motion, how and why do we suddenly reach for ‘contrast’-ing qualities? If we follow physical parsimony strictly, the brain doesn’t do any ‘constructing’ of brightness, or 3d sky, or animals. The brain is *already* constructing complex molecular shapes that do everything that a physical body could possibly evolve to do – without any sense or experience and just using a simple geometry of invisible, unexperienced forces. What would a quality of ‘control’ be doing in a physical universe of automatic, statistical-mechanical inevitables?
“I suspect that our culture actually knew, at some point, that reality, and the sense of reality and being a mind, is the ability to dream – the ability to be some kind of biological machine that dreams about a world that contains it.”
Joscha 14:28
This is what I find so frustrating about Joscha’s view. It is SO CLOSE to getting the bigger picture but it doesn’t go *far enough*. Why doesn’t he see that the biological machine would also be part of the dream? The universe is not a machine that dreams (how? why? parsimony, hard problem) – it’s a dream that machines sometimes. Or to be more precise (and to advertise my multisense realism views), the universe is THE dream that *partially* divides itself into dreams. I propose that these diffracted dreams lens each other to seem like anti-dreams (concrete physical objects or abstract logical concepts) and like hyper-dreams (spiritual/psychedelic/transpersonal/mytho-poetic experiences), depending on the modalities of sense and sense-making that are available, and whether they are more adhesive to the “Holos” or more cohesive to the “Graphos” end of the universal continuum of sense.
“So what do we learn from intelligence in nature? So first if first if we want to try to build it, we need to start with some substrates. So we need to start with some representations.”
Yulia 16:08
Just noting this statement because in my understanding, a physical substrate would be a presentation rather than a re-presentation. If we are talking about the substrates in nature, we are talking about what? Chemistry? Cells made of molecules? Shapes moving around? Right away Yulia’s view seems to give objects representational abilities. I understand that the hard problem of consciousness is not supposed to be part of the scope of her talk, but I am that guy who demands that, at this moment in time, it needs to be part of every talk that relates to AI!
“…and in nature the representations used seem to be distributed. Neural networks, if you’re familiar with those, multiple units, multi-dimensional vectors represent things in the world…and not just (you know) single symbols.”
Yulia 16:20
How is this power of representation given to “units” or “vectors”, particularly if we are imagining a universe prior to consciousness? Must we assume that parts of the world just do have this power to symbolize, refer to, or seem like other parts of the world in multiple ways? That’s fine, I can set aside consciousness and listen to where she is going with this.
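For readers unfamiliar with the jargon, here is a minimal sketch (in Python, my own toy illustration, not anything from the talk) of the contrast Yulia is drawing between a single-symbol, one-hot code and a distributed vector code. The vocabulary, dimensionality, and values are all hypothetical.

```python
# Toy contrast between a localist/symbolic code and a distributed code.
# Purely illustrative; nothing here comes from the talk.
import numpy as np

vocab = ["cat", "dog", "car"]

# Localist / symbolic: one unit per concept (one-hot). "cat" is unit 0,
# and no other unit carries any information about it.
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# Distributed: each concept is a pattern of activity across many units;
# no single unit "means" cat - the whole vector does.
rng = np.random.default_rng(0)
distributed = {w: rng.normal(size=8) for w in vocab}  # toy 8-d vectors

def cosine(a, b):
    """Similarity of two codes (1.0 = same direction, 0.0 = orthogonal)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-hot codes are all mutually orthogonal, while distributed codes are
# graded - trained embeddings would place "cat" nearer "dog" than "car".
print(cosine(one_hot["cat"], one_hot["dog"]))          # exactly 0.0
print(cosine(distributed["cat"], distributed["dog"]))  # some nonzero value
```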
17:16: I like what Yulia brings up about the differences between natural (biological, really) and technological approaches. She says that nature begins with dynamic stability through adaptation to change (homeostasis, yes?) while AI architecture starts with something static into which we introduce change if needed. I think that’s a good point, and relate it to my view that “AI is Inside Out”. I agree and go further: not only does nature begin with change and add stasis when needed, but nature begins with *everything* that it is, while AI begins with *nothing*…or at least it did until we started using enormous sets of training data from the world.
- to 18:14: She’s discussing the lag between sensation and higher cognition…the delay that makes prediction useful. This is a very popular notion, and it is true as far as it goes. Sure, if we look at the events in the body as a chain reaction on the micro timescale, then there is a sequence going from retina to optic nerve to visual cortex, etc – but I would argue this is only one of many timescales that we should understand and consider. In other ways, my body’s actions are *behind* my intentions for it. My typing fingers are racing to keep up with the dictation from my inner voice, which is racing to keep up with my failing memory of the ideas that I want to express. There are many agendas hovering over and above my moment-to-moment perceptions, only some of which I am personally aware of at any given moment, though I recognize my control over them in the long term. To look only at the classical scale of time and biology is to fall prey to the fallacy of smallism.

I can identify at least six modes of causality/time with only two of them being sequential/irreversible.

The denial of other modes of causality becomes a problem if the thing we’re interested in – personal consciousness, does not exist on that timescale or causality mode that we’re assuming is the only one that is real. I don’t think that we exist in our body or brain at all. The brain doesn’t know who we are. We aren’t there, and the brain’s billions of biochemical scale agendas aren’t here. Neither description represents the other, and only the personal scale has the capacity to represent anything. I propose that they are different timescales of the same phenomenon, which is ‘consciousness’, aka nested diffractions of the aesthetic-participatory Holos. One does not cause the other in the same way that these words you see on your screen are not causing concepts to be understood, and the pixels of the screen aren’t causing a perception of them as letters. They coincide temporally, but are related only through a context of conscious perception, not built up from unconscious functions of screens, computers, bodies, or brains.
- to 25:39 …cool stuff about insect brains, neural circuits etc.
- 25:56 talking about population coding, distributed representations. I disagree with the direction that representation is supposed to take here, in that I think it is important to at least understand that brain functions cannot *literally* re-present anything. It is actually the image of the brain that is a presentation in our personal awareness, one that iconically recapitulates some aspects of the subpersonal timescale of awareness that we’re riding on top of. Again, I think we’re riding in parallel, not in series, with the phenomenon that we see as brain activity. I suggest that the brain activity never adds up to a conscious experience. The brain is the physical inflection point of what we do to the body and what the body does to us. Its activity is already a conscious experience in a smaller and larger timescale than our own, one that is being used by the back end of another, personal timescale of conscious experience. What we see as the body is, in that timescale of awareness that is subpersonal rather than subconscious, a vast layer of conscious experiences that only look like mechanisms because of the perceptual lensing that diffracts each perspective from all of the others. The personal scope of awareness sees the subpersonal scope of awareness as a body/cells/molecules because it is objectifying the vast distance between that biological/zoological era of conscious experience and our own so that the two can coexist. It is, in some sense, our evolutionary past – still living prehistorically. We relate to it as an alien community through microscopes. I say this to point the way toward a new idea. I’m not expecting that this would be common knowledge, and I don’t consider cutting edge thinkers like Sandamirskaya and Bach ‘wrong’ for not thinking of it that way. Yes, I made this view of the universe up – but I think that it actually works better than the alternatives that I have seen so far.
- to 34:00 talking about the unity of the brain’s physical hardware with its (presumed) computing algorithms vs the disjunction between AI algorithms and the hardware/architectures we’ve been using. Good stuff, and again aligns with my view of AI being inverted or inside out. Our computers are a bottom-up facade that imitate some symptoms of some intelligence. Natural intelligence is bottom up, top down, center out, periphery in, and everything in between. It is not an imitation or an algorithm but it uses divided conscious experience to imitate and systemize as well as having its own genuine agendas that are much more life affirming and holistic than mere survival or control. Survival and control are annoyances for intelligence. Obstructions to slow down the progress from thin scopes of anesthetized consciousness to richer aesthetics of sophisticated consciousness. Yulia is explaining why neuroscience provides a good example of working AI that we should study and emulate – I agree that we should, but not because I think it will lead to true AGI, just that it will lead to more satisfying prosthetics for our own aesthetic-participatory/experiential enhancement…which is really what we’re trying to do anyhow, rather than conjure a competing inorganic super-species that cannot be killed.
When Joscha resumes after 34:00, he discusses Dall-E and the idea of AI as ‘dreaming’ but at the same time as ‘brute force’ with superhuman training on 800 million images. Here I think the latter excludes the former. Brute force training, yes; dreaming and learning, no. Not literally. No more than a coin sorter learns banking. No more than an emoji smiles at us. I know this is tedious, but I am compelled to continue to remind the world about the pathetic fallacy. Dall-E doesn’t see anything. It doesn’t need to. It’s not dreaming up images for us. It’s a fancy cash register that we have connected to a hypnotic display of its statistical outputs. Nothing wrong with that – it’s an amazing and mostly welcome addition to our experience and understanding. It is art in a sense, but in another it’s just a Ouija board through which we see recombinations of art that human beings have made for other human beings based on what they can see. If we want to get political about it, it’s a bit of a colonial land grab for intellectual property – but I’m ok with that for the moment.
In the dialogue that follows in the middle of the video, there is some interesting and unintentionally connected discussion about the lack of global understanding of the brain and the lack of interdisciplinary communication within academia between neuroscientists, cognitive scientists, and neuromorphic engineers (philosophers of mind not invited ;( ).
Note to self: get a bit more background on the AI silver bullet of the moment, the stochastic gradient descent algorithm. A minimal sketch of the idea follows below.
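For future reference, here is a toy sketch of what stochastic gradient descent does (my own illustration, with a made-up fitting problem and learning rate; nothing here is from the video): nudge the parameters against the gradient of the error, estimated from one randomly chosen sample at a time rather than from the whole dataset.

```python
# Toy stochastic gradient descent: fit y = w*x + b to data generated
# from y = 2x + 1, updating on one random sample per step.
import random

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01  # parameters and a hypothetical learning rate

for step in range(20000):
    x, y = random.choice(data)   # "stochastic": a single random sample
    err = (w * x + b) - y        # signed prediction error on that sample
    w -= lr * err * x            # gradient of the squared error w.r.t. w
    b -= lr * err                # gradient of the squared error w.r.t. b

print(round(w, 2), round(b, 2))  # should land near 2.0 and 1.0
```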
Bach and Sandamirskaya discuss the benefits and limitations of the neuromorphic, embodied hardware approach vs investing more in building simulations using traditional computing hardware. We are now into the shop talk part of the presentation. I’m more of a spectator here, so it’s interesting but I have nothing to add.
By 57:12 Joscha offers a hypothesis about the failure of AI thus far to develop higher understanding.
“…the current systems are not entangled with the world, but I don’t think it’s because they are not robots, I think it’s because they’re not real time.”
To this I say it’s because ‘they’ are not real. It’s the same reason why the person in the mirror isn’t actually looking back at you. There is no person there. There is an image in our visual awareness. The mirror doesn’t even see it. There is no image for the mirror, it’s just a plane of electromagnetically conditioned metal behind glass that happens to do the same kind of thing that the matter of our eyeballs does, which is just optical physics that need not have any visible presentation at all.
The problem is the assumption that we are our body, or are in our body, or are generated by a brain/body rather than seeing physicality as a representation of consciousness on one timescale that is more fully presented in another that we can’t directly access. When we see an actor in a movie, we are seeing a moving image and hearing sound. I think that the experience of that screen image as a person is made available to us not through processing of those images and sounds but through the common sense that all images and sounds have with the visible and aural aspects of our personal experience. We see a person *through* the image rather than because of it. We see the ‘whole’ through ‘holes’ in our perception.
This is a massive intellectual shift, so I don’t expect anyone to be able to pull it off just by thinking about it for 30 seconds, even if they wanted to. It took several years of deep consideration for me. The hints are all around us though. Perceptual ‘fill-in’ is the rule, not the exception. Intuition. Presentiment. Precognitive dreams, remote viewing, and other psi. NDEs. Blindsight and synesthesia.
When we see each other as an image of a human body we are using our own limited human sight, which is also limited by the animal body>eyes>biology>chemistry>physics. All of that is only the small illuminated subset of consciousness-that-we-are-personally-conscious-of-when-we-are-normatively-awake. It should be clear that this is not all that we are. I am not just these words, or the writer of these words, or a brain or a body, or a process using a brain or body; I am a conscious experience in a universe of conscious experiences that are holarchically diffracted (top down, bottom up, center out, etc). My intelligence isn’t an algorithm. My intelligence is a modality of awareness that uses algorithms and anti-algorithms alike. It feasts on understanding like olfactory-gustatory awareness feasts on food.
Even that is not all of who I am, and even “I” am not all of the larger transpersonal experience that I live through and that lives through me. I think that people who are gifted with deep understanding of mathematics and systemizing logic tend to have been conditioned to use that part of the psyche to the exclusion of other modes of sense and sense making, leaving the rich heritage of human understanding of larger psychic contexts to atrophy, or worse, to reappear as a projected shadow appearance of ‘woo’ to the defensive ego, still wounded from centuries of theocratic rule. This is of course very dangerous, and, even more dangerous, you need that atrophied part of the psyche to understand why it is dangerous…which is why seeing the hard problem in the first place is too hard for many people, even many philosophers who have been discussing it for decades.
Synchronistically, I now return to the video at 57:54, where Yulia touches on climate change (or more importantly, from our perspective, climate destabilization) and the flawed expectation of mind uploading. I agree with her that it won’t work, although probably for different reasons. It’s not because the substrate matters – it does, but only because the substrate itself is a lensing artifact masking what is actually the totality of conscious experience.
Organic matter and biology are a living history of conscious experience that cannot be transcended without losing the significance and grounding of that history. Just as our body cannot survive by drinking an image of water, higher consciousness cannot flourish in a sandbox of abstract semiotic switches. We flourish *in spite of* the limits of body and brain, not because our experience is being generated by them.
This is not to say that I think organic matter and biology are in any way the limits of consciousness or human consciousness, but rather they are a symptom of the recipe for the development of the rich human qualities of consciousness that we value most. The actual recipe of human consciousness is made of an immense history of conscious experience, wrapped around itself in obscenely complicated ways that might echo the way that protein structures are ordered. This recipe includes seemingly senseless repetition of particular conscious experiences over vast durations of time. I don’t think that this authenticity can be faked. Unlike the patina of an antique chair or the bouquet of a vintage wine that could in theory be replicated artificially, the humanness of human consciousness depends on the actual authenticity of the experience. It actually takes billions of years of just these types of physical > chemical > organic > cellular > somatic > cerebral > anthropological > cultural > historical experiences to build the capacity to appreciate the richness and significance of those layers. Putting a huge data set end product of that chain of experience in the hands of a purely pre-organic electrochemical processor and expecting it to animate into human-like awareness is like trying to train a hydrogen bomb to sing songs around a campfire.
The Self-Seduction of Geppetto

Here, the program finds a way to invert my intentions and turn Geppetto into a robot.
My instructions were “Evil robot Pinocchio making marionnette Geppetto dance as a puppet spectacular detail superrealistic”.
Instead, Pinocchio seems to be always rendered with strings (I didn’t ask for that), and only partially a robot. Pinocchio seems to have a non-robot head and a body that ranges from non-robotic to semi-robotic. It seems ambiguous whether it is Geppetto or Pinocchio who is the evil robot puppet. At the end it appears to be a hapless Geppetto who has been taken over by the robot completely (I didn’t ask for that) and (the hallucination of?) Pinocchio is gone.
I am reminded of the Maya Angelou re-quote
“When people show you who they are, believe them the first time.”
Intellectual Blind Spot and AI
The shocking blind spot that is common to so many highly intellectual thinkers, the failure of AI, and the lack of understanding about what consciousness is are different aspects of the same thing.
The intellectual function succeeds because it inverts the natural relation of what I would call sensory-motive phenomena. Natural phenomena, including physical aspects of nature, are always qualitative, participatory exchanges of experience. Because the intellect has a special purpose to freely hypothesize without being constrained by the rest of nature, intellectual experience lacks direct access to its own dependence on the rest of nature. Thinking feels like it occurs in a void. It feels like it is not feeling.
When we subscribe to a purely intellectual view of life and physics as information processing, we disqualify the aesthetic dimension of nature, which is ultimately the sole irreducible and irreplaceable resource from which all phenomena arise – not as generic recombinations of quantum-mechanical states but as an infinite font of novel aesthetic-participatory diffractions of the eternal totality of experience. This is what cannot be “simulated” or imitated…because it is originality itself.
Numbers and logic can only reflect the creativity of that resource, not generate it. No amount of binary math can replace the colors displayed on a video screen, or a conscious user that can see it. It need not be anything mystical or religious – it’s just parsimony. Information processing doesn’t need any awareness, it just needs isolated steps in a chain reaction on some physical substrate that can approximate the conditions of reliable but semi-mutable solidity. Gears, semiconductors, a pile of rocks…it doesn’t matter what the form is because there is no sense of form going on. All that is going on is low level generic changes that have no capacity to add themselves up. There are no ‘emergent properties’ outside of consciousness. Math and physics can’t ‘seem like’ anything because seeming is not a logical/mathematical or physical function.
A Multisense Realist Critique of “Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism”
Let me begin by saying first that my criticism of the thoughts, ideas, and assumptions behind this hypothesis on the Hard Problem (and of all such hypotheses) does not in any way constitute a challenge to the expertise or intelligence of its authors. I have the utmost respect for anyone who takes the time to thoughtfully formulate an opinion on the matter of consciousness, and I do not in any way place myself on the same intellectual level as those who have spent their career achieving a level of skill and knowledge of mathematics and technology that is well beyond my grasp.
I do have a lifelong interest in the subject of consciousness, and most of a lifetime of experience with computer technology; however, that experience is much more limited in scope and depth than that of full-time, professional developers and engineers. Having said that, without inviting accusations of succumbing to the Dunning-Kruger effect, I dare to wonder if abundant expertise in computer science may impair our perception in this area as well, and I would desperately like to see studies performed to evaluate the cognitive bias of those scientists and philosophers who see the Hard Problem of Consciousness as a pseudo-issue that can be easily dismissed by reframing the question.
Let me begin now in good faith to mount an exhaustive and relentless attack on the assumptions and conclusions presented in the following: “Chapter 15: Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism” by Richard Loosemore (PDF link), from the book Theoretical Foundations of Artificial General Intelligence, Editors: Wang, Pei, Goertzel, Ben (Eds.) I hope that this attack is not so annoying, exhausting, or offensive that it prevents readers from engaging with it and from considering the negation/inversion of the fundamental premises that it relies upon.
From the very top of the first page…
“To solve the hard problem of consciousness we observe that any cognitive system of sufficient power must get into difficulty when it tries to analyze consciousness concepts, because the mechanism that does the analysis will “bottom out” in such a way as to make the system declare these concepts to be both real and ineffable.”
Objections:
1: The phenomenon of consciousness (as distinct from the concept of consciousness) is the only possible container for qualities such as “real” or “ineffable”. It is a mistake to expect the phenomenon itself to be subject to the categories and qualities which are produced only within consciousness.
2: Neither my analysis of the concept nor of the phenomenon of consciousness ‘bottoms out’ in the way described. I would say that consciousness is both real, more than real, less than real, effable, semi-effable, and trans-effable, but not necessarily ineffable. Consciousness is the aesthetic-participatory nesting of sensory-motive phenomena from which all other phenomena are derived and maintained, including anesthetic, non-participatory, non-sensory, and non-motivated appearances such as those of simple matter and machines.
“This implies that science must concede that there are some aspects of the world that deserve to be called “real”, but which are beyond explanation.”
Here my understanding is that attempting to explain (ex-plain) certain aspects of consciousness is redundant since they are already ‘plain’. Blue is presented directly as blue. It is a visible phenomenon which is plain to all those who can see it and unexplainable to all those who cannot. There is nothing to explain about the phenomenon itself, as any such effort would only make the assumption that blue can be decomposed into other phenomena which are not blue. There is an implicit bias or double standard in such assumptions that any of the other phenomena which we might try to use to account for the existence of blue would also require explanations to decompose them further as well. How do we know that we are even reading words that mean what they mean to another person? As long as a sense of coherence is present, even the most surreal dream experiences can be validated within the dream as perfectly rational and real.
Even the qualifier “real” is also meaningless outside of consciousness. There can be no physical or logical phenomenon which is unreal or can ‘seem’ other than it is without consciousness to provide the seeming. The entire expectation of seeming is an artifact of some limitation on a scope of perception, not of physical or logical fact.
“Finally, behind all of these questions there is the problem of whether we can explain any of the features of consciousness in an objective way, without stepping outside the domain of consensus-based scientific enquiry and becoming lost in a wilderness of subjective opinion.”
This seems to impose a doomed constraint onto any explanation in advance: since the distinction between subjective and objective can only exist within consciousness, consciousness cannot presume to transcend itself by limiting its scope to only those qualities which consciousness itself deems ‘objective’. There is no objective arbiter of objectivity, and presuming that such a standard is equivalent to or available through our scientific legacy of consensus is especially biased, considering the intentional reliance of that scientific tradition on instruments and methods which are designed to exclude all association with subjectivity.* To ask that an explanation of consciousness be limited to consensus science is akin to asking “Can we explain life without referring to anything beyond the fossil record?” In my understanding, science itself must expand radically to approach the phenomenon of consciousness, rather than consciousness having to be reduced to fit into our cumulative expectations about nature.
“One of the most troublesome aspects of the literature on the problem of consciousness is the widespread confusion about what exactly the word “consciousness” denotes.”
I see this as a sophist objection (ironically, I would also say that this all-too-common observation is one of the most troublesome aspects of materialistic arguments against the hard problem). Personally, I have no confusion whatsoever about what the common sense term ‘consciousness’ refers to, and neither does anyone else when it comes to the actual prospect of losing consciousness. When someone is said to have lost consciousness forever, what is lost? The totality of experience. Everything would be lost for the person whose consciousness is truly and completely lost forever. All that remains of that person would be the bodily appearances and memories in the conscious experiences of others (doctors, family members, cats, dust mites, etc). If all conscious experience were to terminate forever, what remained would be impossible to distinguish from nothing at all. Indeed there would be no remaining capacity to ‘distinguish’ either.
I will skip over the four bullet points from Chalmers’ work in 15.1.1 (The ability to introspect or report mental states…etc), as I see them as distractions arising from specific use cases of language and the complex specifics of human psychology rather than from the simple/essential nature of consciousness as a phenomenon.
Moving on to what I see as the meat of the discussion – qualia. In this next section, much is made about the problems of communicating with others about specific phenomenal properties. I see this as another distraction and if we interrogate this definition of qualia as that which “we cannot describe to a creature that does not claim to experience them”, we will find that it is a condition which everything in the universe fits just as well.
We cannot describe numbers, or gravity, or matter to a creature that does not claim to experience them either. Ultimately the only difference between qualia and non-qualia is that non-qualia only exist hypothetically. Things which are presumed to exist independently of subjectivity, such as matter, energy, time, space, and information are themselves concepts derived from intersubjective consensus. Just as the Flatlander experiences a sphere only as a circle of changing size, our entire view of objective facts and their objectiveness is objectively limited to those modalities of sense and sense-making which we have access to. There is no universe which is real that we could not also experience as the content of a (subjective) dream and no way to escape the constraints that a dream imposes even on logic, realism, and sanity themselves. A complete theory of consciousness cannot merely address the narrow kind of sanity that we are familiar with as thinking adults conditioned by the accumulative influence of Western society, but also of non-ordinary experiences, mystical states of consciousness, infancy, acquired savant syndrome, veridical NDEs and reincarnation accounts, and on and on.
“…a philosophical zombie would behave as if it did have its own phenomenology (indeed its behavior, ex hypothesi, would be absolutely identical to its normal twin) but it would not experience any of the subjective sensations that we experience when we use our minds.”
As much as I revere David Chalmers’ brilliant insights into the Hard Problem which he named, I see the notion of a philosophical zombie as flawed from the start. While we can imagine that two biological organisms are physically identical with and without subjective experience, there is no reason to insist that they must be.
I would pose it differently and ask “Can a doll be created that would seem to behave in every respect like a human being, but still be only a doll?” To that question I would respond that there is no logical reason to deny that possibility; however, we also cannot deny the possibility that some people might at some time be able to feel an ‘uncanny’ sense about such a doll, even if they are not consciously able to notice that sense. The world is filled with examples of people who can pretend and act as if they are experiencing subjective states that they are not. Professional actors and sociopaths, for example, are famously able to simulate deep sentiment and emotion, summoning tears on command, etc. I would ask of the AI dev community: what if we wanted to build an AGI simulator which did not have any qualia? Suppose we wanted to study the effects of torture – could we not hope to engineer a device or program which would allow us to understand some of the effects without having to actually subject a conscious device to excruciating pain? If so, then we cannot presume qualia to emerge automatically from structure or function. We have to have a better understanding of why and how qualia exist in the first place. That is the hard problem of consciousness.
“Similarly, if we know that wires from a red color-detection module are active, this tells us the cognitive level fact that the machine is detecting red, but it does not tell us if the machine is experiencing a sensation of redness, in anything like the way that we experience redness.”
Here I suggest that in fact the machine is not detecting red at all, but it is detecting some physical condition that corresponds to some of our experiences of seeing red, i.e. the open-eye presence of red which correlates to 680nm wavelength electromagnetic stimulation of retinal cells. Since many people can dream and imagine red in the absence of such ophthalmological stimulation**, we cannot equate that detection with red at all.
Further, I would not even allow myself to assume that what a retinal cell or any physical instrument does in response to illumination automatically constitutes ‘detection’. Making such leaps is, in my understanding, precisely how our thinking about the hard problem of consciousness slips into circular reasoning. To see any physical device as a sensor or sense organ is to presume a phenomenological affect on a micro scale, as well as a mechanical effect described by physical force/field mathematics. If we define forces and fields purely as mechanical facts with no sensory-motive entailment, then it follows logically that no complex arrangement of such force-field mechanisms would necessarily result in any addition or emergence of such an entailment. If shining a light on a molecule changes the shape or electrical state of that molecule, every subsequent chain of physical changes effected by that cause will occur with or without any experience of redness. Any behavior that a human body or any species of body can evolve to perform could just as easily have evolved to be performed without anything but unexperienced physical chain reactions of force and field.
The trouble is that when we try to say what we mean by the hard problem, we inevitably end up by saying that something is missing from other explanations. We do not say “Here is a thing to be explained,” we say “We have the feeling that there is something that is not being addressed, in any psychological or physical account of what happens when humans (or machines) are sentient.”
To me, this is a false assumption that arises from an overly linguistic approach to the issue. I do in fact say “Here is a thing to be explained”. In fact, I could use that very same word “here” as an example of that thing. What is the physical explanation for the referent of the term “here”? What gives a physical event the distinction of being ‘here’ versus ‘there’?
The presence of something like excruciating pain can’t be dismissed on account of a compulsion to assume that ‘Ouch!’ needs to be deconstructed into nociception terminology. I would turn this entire description of ‘the trouble’ around to ask the author why they feel that there is something about pain that is not communicated to anyone who experiences it directly, and how anything meaningful about that experience could be addressed by other, non-painful accounts.
On to the dialectic between skeptic and phenomenologist:
“The difficulty we have in supplying an objective definition should not be taken as grounds for dismissing the problem—rather, this lack of objective definition IS the problem!”
I’m not sure why a phenomenologist would say that. To me, the hard problem of consciousness has nothing at all to do with language. We have no problem communicating “Ouch!”. The only problem is in the expectation that all terms should translate into all languages. There is no problem with reducing a subjective quality of phenomenal experience into a word or gesture – the hard problem is why and how there should be any inflation of non-phenomenal properties to ‘experience’ in the first place. I don’t find it hard to articulate, though many people do seem to have a hard time accepting that it makes sense.
“In effect, there are certain concepts that, when analyzed, throw a monkey wrench into the analysis mechanism”
I would reconstruct that observation this way: “In effect, there are certain concepts that, when analyzed, point to facts beyond the analysis mechanism, and further beyond mechanism and analysis. These are the facts of qualia from which the experiences of analysis and mechanical appearance are derived.”
“All facets of consciousness have one thing in common: they involve some particular types of introspection, because we “look inside” at our subjective experience of the world”
Not at all. Introspection is clearly dependent on consciousness, but so are all forms of experience. Introspection does not define consciousness, it is only a conscious experience of trying to make intellectual sense of one’s own conscious experience. Looking outside requires as much consciousness as looking inside and unconscious phenomena don’t ‘look’.
From that point in the chapter, there is a description of some perfectly plausible ideas about how to design a mechanism which would appear to us to simulate the behaviors of an intelligent thinker, but I see no connection between such a simulation and the hard problem of consciousness. The premise underestimates consciousness to begin with and then goes on to speculate on how to approximate that disqualified version of qualia production, consistently mistaking qualia for ‘concepts’ that cannot be described.
Pain is not a concept, it is a percept. Every function of the machine described could just as easily be presented as hexadecimal code, words, binary electronic states, etc. A machine could put together words that we recognize as having to do with pain, but that sense need not be available to the machine. In the mechanistic account of consciousness, sensory-motive properties are taken for granted and aesthetic-participatory elaborations of those properties that we would call human consciousness are misattributed to the elaborations of mechanical process. That “blue” cannot be communicated to someone who cannot see it does not define what blue is. Building a machine that cannot explain what is happening beyond its own mechanism doesn’t mean that qualia will automatically appear to stand in for that failure. Representation requires presentation, but presentation does not require representation. Qualia are presentations, including the presentation of representational qualities between presentations.
“Yes, but why would that short circuit in my psychological mechanism cause this particular feeling in my phenomenology?”
Yes, exactly, but that’s still not the hard problem. The hard problem is “Why would a short circuit in any mechanism cause any feeling or phenomenology in the first place? Why would feeling even be a possibility?”
“The analysis mechanism inside the mind of the philosopher who raises this objection will then come back with the verdict that the proposed explanation fails to describe the nature of conscious experience, just as other attempts to explain consciousness have failed. The proposed explanation, then, can only be internally consistent with itself if the philosopher finds the explanation wanting. There is something wickedly recursive about this situation.”
Yes, it is wickedly recursive in the same exact way that any blind faith/Emperor’s New Clothes persuasion is wickedly recursive. What is proposed here can be used to claim that any false theory about consciousness which predicts that it will be perceived as false is evidence of its (mystical, unexplained) essential truth. It is the technique of religious dogma in which doubt is defined as evidence of the unworthiness of the doubter to deserve to understand why it isn’t false.
“I am not aware of any objection to the explanation proposed in this chapter that does not rely for its force on that final step, when the philosophical objection deploys the analysis mechanism, and thereby concludes that the proposal does not work because the analysis mechanism in the head of the philosopher returned a null result.”
Let me try to make the reader aware of one such objection then. I do not use an analysis mechanism; I use the opposite – an anti-mechanism of direct participation that seeks to discover greater qualities of sense and coherence for their own aesthetic saturation. That faculty of my consciousness does not return a null result. It has instead returned a rich cosmogony detailing the relationships between a totalistic spectrum of aesthetic-participatory nestings of sensory-motive phenomena and its dialectic, diffracted altars: matter (concrete anesthetic appearances) and information (abstract anesthetic appearances).
“I am now going to make a case that all of the various subjective phenomena associated with consciousness should be considered just as “real” as any other phenomena in the universe, but that science and philosophy must concede that consciousness has the special status of being unanalyzable.”
I’m glad that qualia are at least given a ‘real’ status! I don’t see that it’s unanalyzable though. I analyze qualia all the time. I think the limitation is that the analysis doesn’t translate into math or geometry…which is exactly what I would expect because I understand the role of math and geometry to be precisely the qualia which are presented to represent the disqualification of alienated/out of bounds qualia. We don’t experience on a geological timescale, so our access to experiences on that scale is reduced to a primitive vocabulary of approximations. I suggest that when two conscious experiences of vastly disparate timescales engage with each other, there is a mutual rendering of each other as either inanimate or intangible…as matter/object or information/concept.
In the latter parts of this chapter, the focus is on working with the established hypothesis of qualia as bottomed-out mechanical analysis. The irony of this is that I can see clearly that it is math and physics, mechanism and analysis which are the qualia of bottomed out direct perception. The computationalist and physicalist both have got the big picture turned inside out, where the limitations of language and formalism are hallucinated into sources of infinite aesthetic creativity. Sight is imagined to emerge naturally from imperfect blindness. It’s an inversion of map and territory on the grandest possible scale.
“When we say that a concept is more real the more concrete and tangible it is, what we actually mean is that it gets more real the closer it gets to the most basic of all concepts. In a sense there is a hierarchy of realness among our concepts, with those concepts that are phenomenologically rich being the most immediate and real, and with a decrease in that richness and immediacy as we go toward more abstract concepts.”
To the contrary, when we say that a concept is more real the more concrete and tangible it is, what we actually mean is that it gets more real the further it gets from the most abstract of all qualia: concepts. No concepts are as phenomenologically rich, immediate, and real as literally everything that is not a concept.
“This seems to me a unique and unusual compromise between materialist and dualist conceptions of mind. Minds are a consequence of a certain kind of computation; but they also contain some mysteries that can never be explained in a conventional way.”
Here too, I see that the opposite clearly makes more sense. Computation is a consequence of certain kinds of reductive approximations within a specific band of consciousness. To compute or calculate is actually the special and (to us) mysterious back door to the universal dream which enables dreamers to control and objectify aspects of their shared experience.
I do love all of the experiments proposed toward the end, although it seems to me that all of the positive results could be simulated by a device that is designed to simulate the same behaviors without any qualia. Of all of the experiments, I think that the mind-meld is most promising, as it could possibly expose our own consciousness to phenomena beyond our models and expectations. We may be able, for example, to connect our brain to the brain of a fish and really be able to tell that we are feeling what the fish is feeling. Because my view of consciousness is that it is absolutely foundational, all conscious experience overlaps at that fundamental level in an ontological way rather than merely as a locally constructed model. In other words, while some aspects of empathy may consist only of modeling the emotions of another person (as a sociopath might do), I think that there is a possibility for genuine empathy to include a factual sharing of experience, even beyond assumed boundaries of space, time, matter, and energy.
Thank you for taking the time to read this. I would not bother writing it if I didn’t think that it was important. The hard problem of consciousness may seem to some as an irrelevant, navel-gazing debate, but if I am on the right track in my hypothesis, it is critically important that we get this right before attempting to modify ourselves and our civilization based on a false assumption of qualia as information.
Respectfully and irreverently yours,
Craig Weinberg
*This point is addressed later on in the chapter: “it seems almost incoherent to propose a scientific (i.e. non-subjective) explanation for consciousness (which exists only in virtue of its pure subjectivity).”
**Not to mention the reports from people blind from birth of seeing colors during Near Death Experiences.
De-Simulating Natural Intelligence
Hi friends! I’m getting ready for my poster presentation at the Science of Consciousness conference in Interlaken:
Abstract: In recent years, scientific and popular imagination has been captured by the idea that what we experience directly is a neuro-computational simulation. At the same time, there is a contradictory idea that some things that we experience, such as the existence of brains and computers, are real enough to allow us to create fully conscious and intelligent devices. This presentation will try to explain where this logic breaks down, why true intelligence may never be generated artificially, and why that is good news. Recent studies have suggested that human perception is not as limited as previously thought and that while machines can do many things better than we can, becoming conscious may not be one of them. The approach taken here can be described as a Variable Aspect Monism or Multisense Realism, and it seeks to clarify the relationship between physical form, logical function, and aesthetic participation.
In Natural Intelligence, intelligence is abstracted from within a full spectrum of aesthetically rich experience that developed over billions of years of evolving sensation and participation.
In Artificial “Intelligence”, intelligence is abstracted from outside the natural, presumably narrow range of barely aesthetic experience that has remained relatively unchanged over human timescales (but has changed over geological timescales, evolving, presumably, very different aesthetics).
What Multisense Realism proposes is more pansensitivity than panpsychism.
The standard notion of panpsychism is what I would call ‘promiscuous panpsychism’, meaning that every atom has to be ‘conscious’ in a kind of thinking, understanding way. I think that this promiscuity is what makes panpsychism unappealing to many/most people.
Under pansensitivity, intelligence *diverges* from a totalistic absolute, diffracting through calibrated degrees of added insensitivity. It’s like in school when kids draw a colorful picture and then cover it with black crayon (the pre-big bang) and then begin to scratch it off to reveal the colors underneath. The black crayon is entropy, the scratching is negentropy, and the size of the revealed image is the degree of aesthetic saturation.
So yes, the physical substances that we use to build machines are forms of conscious experience, but they are very low level, low aesthetics which don’t necessarily scale up on their own (since they have not evolved over billions of years of natural experience by themselves).
I think that despite our success in putting our own high level aesthetic experience into code that we use to manipulate hardware, it is still only reflecting our own natural ‘psychism’ back to us, rather than truly exporting it into the machine hardware.
Sense and Simulation
1. Nothing that can be experienced is a simulation.
There are different levels of perception (experiences of experience) and interpretation (experiences of understanding perceptions), and they can spoof each other, but all experiences are as fundamentally real as any physical substance or process.
If you look into a mirror, you are *really* seeing a *real* image; it’s just that your body isn’t really inside of the mirror. Your physical body can’t actually be seen, it can only be touched and felt. What can be seen is an image (made of color contrast shapes) that reflects both low-level tangible-public and high-level intangible-psychological conditions.
2. The Hard Problem of Consciousness can be reduced to this question: “How can a particle, force, or field become sensitive?”
I think that the answer is that it cannot. Rather, we have to invert our Western presumptions about nature and understand that fields and forces are concepts that may need to be replaced by a more accurate one: direct sensory-perceptive and motive-participatory phenomena – aka nested conscious experiences.
Particles are the way that the division and polarization of experience is rendered in the tangible-tactile modality of sensory-perception.
They are not sensitive, and no structure composed of particles is sensitive, just as no words made of letters generate meaning. The particles and structures, words and letters are literally place-holders…spatiotemporally anchored addresses through which experiences can be organized in increasingly complex, rich, and meaningful ways. This is what nature and the universe are: an anti-mechanical sensory experience of mechanically divided experiencers…an aesthetic holos that renders its self-diffraction through anesthetic holography.