Archive
Joscha Bach, Yulia Sandamirskaya: “The Third Age of AI: Understanding Machines that Understand”
Here are my comments and Extra Annoying Questions™ on this recent discussion. I like and admire/respect both of them and am not claiming to have competence in the specific domains of AI development they’re speaking on, only in the metaphysical/philosophical domains that underlie them. I don’t even disagree with the merits of each of their views on how to best proceed with AI dev in the near future. What fun would it be to write about what I don’t disagree with though? My disagreements are with the big, big, big picture issues of the relationship between consciousness, information processing, and cosmology.
Jumping right in near the beginning…
“The intensity gets associated with brightness and the flatness gets associated with the absence of brightness, with darkness”
Joscha 12:37
First of all, the (neuronal) intensity and flatness *already are functionally just as good as* brightness and darkness. There is no advantage to conjuring non-physical, non-parsimonious, unexplained qualities of visibility to accomplish the exact same thing as was already being accomplished by invisible neuronal properties of ‘intensity’ and ‘flatness’.
Secondly, where are the initial properties of intensity and flatness coming from? Why take those for granted but not sight? In what scope of perception and aesthetic modality is this particular time span presented as a separate event from the totality of events in the universe? What is qualifying these events of subatomic and atomic positional change, or grouping their separate instances of change together as “intense” or “flat”? Remember, this is invisible, intangible, and unconscious. It is unexperienced. A theoretical neuron prior to any perceptual conditioning that would make it familiar to us as anything resembling a neuron, or an object, or an image.
Third, what is qualifying the qualification of contrast, and why? In a hypothetical ideal neuron before all conscious experience and perception, the mechanisms are already doing what physical forces mechanically and inevitably demand. If there is a switch- or gate-shaped structure in a cell membrane that opens when ions pile up, that is what is going to happen regardless of whether there is any qualification of the piling of ions as ‘contrasting’ against any subsequent absence of piles of ions. Nothing is watching to see what happens if we don’t assume consciousness. So now we have exposed as unparsimonious and epiphenomenal to physics not only visibility (brightness and darkness) and observed qualities of neuronal activity (intensity and flatness), but also the purely qualitative evaluation of ‘contrast’. Without consciousness, there isn’t anything to cause a coherent contrast that defines the beginning and ending of an event.
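To make the mechanistic point concrete, here is a minimal sketch (my own toy, not anything from the talk) of such an ion ‘gate’ as pure mechanism: input piles up, the gate opens at a fixed threshold, and at no point does anything in the process qualify any event as ‘intense’, ‘flat’, or ‘contrasting’.

```python
# A threshold "gate" as pure mechanism (toy sketch, hypothetical numbers).
# Charge accumulates and leaks; when it crosses a fixed threshold, the gate
# "opens". Nothing in this process labels any event as intense, flat, or
# contrasting -- the chain reaction simply runs.

def run_gate(inputs, threshold=1.0, leak=0.9):
    charge = 0.0
    openings = []
    for x in inputs:
        charge = charge * leak + x   # ions pile up, with passive leakage
        if charge >= threshold:      # purely mechanical consequence
            openings.append(True)
            charge = 0.0             # reset after opening
        else:
            openings.append(False)
    return openings

print(run_gate([0.2, 0.9, 0.3, 0.0, 1.5]))  # [False, True, False, False, True]
```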
- 13:42 I do like Joscha’s read of the story of Genesis as a myth describing consciousness emerging from a neurological substrate, however, I question why the animals he mentions are constructed ‘in the mind’ rather than discovered. Also, why so much focus on sight? What about the other senses? We can feel the heat of the sun – why not make animals out of arrays of warm and cool pixels instead of bright and dark? Why have multiple modes of aesthetic presentation at all? Again – where is the parsimony that we need for a true solution to the hard problem / explanatory gap? If we already have molecules doing what molecules must do in a neuron, which is just move or resist motion, how and why do we suddenly reach for ‘contrast’-ing qualities? If we follow physical parsimony strictly, the brain doesn’t do any ‘constructing’ of brightness, or 3D sky, or animals. The brain is *already* constructing complex molecular shapes that do everything that a physical body could possibly evolve to do – without any sense or experience and just using a simple geometry of invisible, unexperienced forces. What would a quality of ‘control’ be doing in a physical universe of automatic, statistical-mechanical inevitables?
“I suspect that our culture actually knew, at some point, that reality, and the sense of reality and being a mind, is the ability to dream – the ability to be some kind of biological machine that dreams about a world that contains it.”
Joscha 14:28
This is what I find so frustrating about Joscha’s view. It is SO CLOSE to getting the bigger picture but it doesn’t go *far enough*. Why doesn’t he see that the biological machine would also be part of the dream? The universe is not a machine that dreams (how? why? parsimony, hard problem) – it’s a dream that machines sometimes. Or to be more precise (and to advertise my multisense realism views), the universe is THE dream that *partially* divides itself into dreams. I propose that these diffracted dreams lens each other to seem like anti-dreams (concrete physical objects or abstract logical concepts) and like hyper-dreams (spiritual/psychedelic/transpersonal/mytho-poetic experiences), depending on the modalities of sense and sense-making that are available, and whether they are more adhesive to the “Holos” or more cohesive to the “Graphos” end of the universal continuum of sense.
“So what do we learn from intelligence in nature? So first, if we want to try to build it, we need to start with some substrates. So we need to start with some representations.”
Yulia 16:08
Just noting this statement because in my understanding, a physical substrate would be a presentation rather than a re-presentation. If we are talking about the substrates in nature we are talking about what? Chemistry? Cells made of molecules? Shapes moving around? Right away Yulia’s view seems to give objects representational abilities. I understand that the hard problem of consciousness is not supposed to be part of the scope of her talk, but I am that guy who demands that at this moment in time, it needs to be part of every talk that relates to AI!
“…and in nature the representations used seem to be distributed. Neural networks, if you’re familiar with those, multiple units, multi-dimensional vectors represent things in the world…and not just (you know) single symbols.”
Yulia 16:20
How is this power of representation given to “units” or “vectors”, particularly if we are imagining a universe prior to consciousness? Must we assume that parts of the world just do have this power to symbolize, refer to, or seem like other parts of the world in multiple ways? That’s fine, I can set aside consciousness and listen to where she is going with this.
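For readers unfamiliar with the jargon, here is a hedged toy illustration (the vectors are invented for this example, not taken from her talk) of what a ‘distributed representation’ amounts to mechanically: instead of one opaque symbol per thing, each thing is spread across many numeric dimensions, and ‘similarity’ falls out of the geometry rather than out of any act of representing.

```python
import math

# Toy distributed representations (invented vectors, for illustration only).
# No single unit stands for "cat"; the concept is spread across dimensions.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "Similarity" here is a property of vector geometry, not of any labeling:
print(cosine(vectors["cat"], vectors["dog"]))  # high (~0.99)
print(cosine(vectors["cat"], vectors["car"]))  # low  (~0.30)
```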
17:16: I like what Yulia brings up about the differences between natural (really biological) and technological approaches. She says that nature begins with dynamic stability by adaptation to change (homeostasis, yes?) while AI architecture starts with something static and then we introduce change if needed. I think that’s a good point, and relate it to my view that “AI is Inside Out“. I agree and go further to add that not only does nature begin with change and add stasis when needed, but nature begins with *everything* that it is while AI begins with *nothing*…or at least it did until we started using enormous sets of training data from the world.
- to 18:14: She’s discussing the lag between sensation and higher cognition…the delay that makes prediction useful. This is a very popular notion and it is true as far as it goes. Sure, if we look at the events in the body as a chain reaction in the micro timescale, then there is a sequence going from retina to optic nerve to visual cortex, etc – but – I would argue this is only one of many timescales that we should understand and consider. In other ways, my body’s actions are *behind* my intentions for it. My typing fingers are racing to keep up with the dictation from my inner voice, which is racing to keep up with my failing memory of the ideas that I want to express. There are many agendas that are hovering over and above my moment-to-moment perceptions, only some of which I am personally aware of at any given moment, though I recognize my control over them in the long term. To look only at the classical scale of time and biology is to fall prey to the fallacy of smallism.

I can identify at least six modes of causality/time with only two of them being sequential/irreversible.

The denial of other modes of causality becomes a problem if the thing we’re interested in – personal consciousness, does not exist on that timescale or causality mode that we’re assuming is the only one that is real. I don’t think that we exist in our body or brain at all. The brain doesn’t know who we are. We aren’t there, and the brain’s billions of biochemical scale agendas aren’t here. Neither description represents the other, and only the personal scale has the capacity to represent anything. I propose that they are different timescales of the same phenomenon, which is ‘consciousness’, aka nested diffractions of the aesthetic-participatory Holos. One does not cause the other in the same way that these words you see on your screen are not causing concepts to be understood, and the pixels of the screen aren’t causing a perception of them as letters. They coincide temporally, but are related only through a context of conscious perception, not built up from unconscious functions of screens, computers, bodies, or brains.
- to 25:39 …cool stuff about insect brains, neural circuits etc.
- 25:56 talking about population coding, distributed representations. I disagree with the direction that representation is supposed to take here; I think it is important to at least understand that brain functions cannot *literally* re-present anything. It is actually the image of the brain that is a presentation in our personal awareness that iconically recapitulates some aspects of the subpersonal timescale of awareness that we’re riding on top of. Again, I think we’re riding in parallel, not in series, with the phenomenon that we see as brain activity. I suggest that the brain activity never adds up to a conscious experience. The brain is the physical inflection point of what we do to the body and what the body does to us. Its activity is already a conscious experience in a smaller and larger timescale than our own, that is being used by the back end of another, personal timescale of conscious experience. What we see as the body is, in that timescale of awareness that is subpersonal rather than subconscious, a vast layer of conscious experiences that only look like mechanisms because of the perceptual lensing that diffracts perspective from all of the others. The personal scope of awareness sees the subpersonal scope of awareness as a body/cells/molecules because it’s objectifying the vast distance between that biological/zoological era of conscious experience and our own, so that the two can coexist. It is, in some sense, our evolutionary past – still living prehistorically. We relate to it as an alien community through microscopy instruments. I say this to point the way toward a new idea. I’m not expecting that this would be common knowledge and I don’t consider that cutting-edge thinkers like Sandamirskaya and Bach are ‘wrong’ for not thinking of it that way. Yes, I made this view of the universe up – but I think that it does actually work better than the alternatives that I have seen so far.
- to 34:00 talking about the unity of the brain’s physical hardware with its (presumed) computing algorithms vs the disjunction between AI algorithms and the hardware/architectures we’ve been using. Good stuff, and again aligns with my view of AI being inverted or inside out. Our computers are a bottom-up facade that imitates some symptoms of some intelligence. Natural intelligence is bottom up, top down, center out, periphery in, and everything in between. It is not an imitation or an algorithm but it uses divided conscious experience to imitate and systemize as well as having its own genuine agendas that are much more life-affirming and holistic than mere survival or control. Survival and control are annoyances for intelligence. Obstructions to slow down the progress from thin scopes of anesthetized consciousness to richer aesthetics of sophisticated consciousness. Yulia is explaining why neuroscience provides a good example of working AI that we should study and emulate – I agree that we should, but not because I think it will lead to true AGI, just that it will lead to more satisfying prosthetics for our own aesthetic-participatory/experiential enhancement…which is really what we’re trying to do anyhow, rather than conjure a competing inorganic super-species that cannot be killed.
When Joscha resumes after 34:00, he discusses Dall-E and the idea of AI as ‘dreaming’ but at the same time as ‘brute force’ with superhuman training on 800 million images. Here I think the latter is mutually exclusive of the former. Brute force training yes, dreaming and learning, no. Not literally. No more than a coin sorter learns banking. No more than an emoji smiles at us. I know this is tedious but I am compelled to continue to remind the world about the pathetic fallacy. Dall-E doesn’t see anything. It doesn’t need to. It’s not dreaming up images for us. It’s a fancy cash register that we have connected to a hypnotic display of its statistical outputs. Nothing wrong with that – it’s an amazing and mostly welcome addition to our experience and understanding. It is art in a sense, but in another it’s just a Ouija board through which we see recombinations of art that human beings have made for other human beings based on what they can see. If we want to get political about it, it’s a bit of a colonial land grab for intellectual property – but I’m ok with that for the moment.
In the dialogue that follows in the middle of the video, there is some interesting and unintentionally connected discussion about the lack of global understanding of the brain and the lack of interdisciplinary communication within academia between neuroscientists, cognitive scientists, and neuromorphic engineers (philosophers of mind not invited ;( ).
Note to self: get a bit more background on the AI silver bullet of the moment, the stochastic gradient descent algorithm.
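For fellow spectators, a minimal sketch of what stochastic gradient descent amounts to (a toy of my own, under the assumption of a simple squared-error loss): nudge the parameters against the gradient of the error, one randomly chosen example at a time, until a line fits the data.

```python
import random

# Toy stochastic gradient descent: fit y = w*x + b by nudging (w, b)
# against the error gradient, one randomly chosen sample per step.
random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # ground truth: w=2, b=1

w, b, lr = 0.0, 0.0, 0.01
for step in range(5000):
    x, y = random.choice(data)   # "stochastic": a single random sample
    error = (w * x + b) - y      # prediction minus target
    w -= lr * error * x          # gradient of 0.5*error**2 w.r.t. w
    b -= lr * error              # gradient of 0.5*error**2 w.r.t. b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```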
Bach and Sandamirskaya discuss the benefits and limitations of the neuromorphic, embodied hardware approach vs investing more in building simulations using traditional computing hardware. We are now into the shop talk part of the presentation. I’m more of a spectator here, so it’s interesting but I have nothing to add.
By 57:12 Joscha makes a hypothesis about the failure of AI thus far to develop higher understanding.
“…the current systems are not entangled with the world, but I don’t think it’s because they are not robots, I think it’s because they’re not real time.”
To this I say it’s because ‘they’ are not real. It’s the same reason why the person in the mirror isn’t actually looking back at you. There is no person there. There is an image in our visual awareness. The mirror doesn’t even see it. There is no image for the mirror, it’s just a plane of electromagnetically conditioned metal behind glass that happens to do the same kind of thing that the matter of our eyeballs does, which is just optical physics that need not have any visible presentation at all.
The problem is the assumption that we are our body, or are in our body, or are generated by a brain/body rather than seeing physicality as a representation of consciousness on one timescale that is more fully presented in another that we can’t directly access. When we see an actor in a movie, we are seeing a moving image and hearing sound. I think that the experience of that screen image as a person is made available to us not through processing of those images and sounds but through the common sense that all images and sounds have with the visible and aural aspects of our personal experience. We see a person *through* the image rather than because of it. We see the ‘whole’ through ‘holes’ in our perception.
This is a massive intellectual shift, so I don’t expect anyone to be able to pull it off just by thinking about it for 30 seconds, even if they wanted to. It took several years of deep consideration for me. The hints are all around us though. Perceptual ‘fill-in’ is the rule, not the exception. Intuition. Presentiment. Precognitive dreams, remote viewing, and other psi. NDEs. Blindsight and synesthesia.
When we see each other as an image of a human body we are using our own limited human sight, which is also limited by the animal body>eyes>biology>chemistry>physics. All of that is only the small illuminated subset of consciousness-that-we-are-personally-conscious-of-when-we-are-normatively-awake. It should be clear that is not all that we are. I am not just these words, or the writer of these words, or a brain or a body, or a process using a brain or body, I am a conscious experience in a universe of conscious experiences that are holarchically diffracted (top down, bottom up, center out, etc). My intelligence isn’t an algorithm. My intelligence is a modality of awareness that uses algorithms and anti-algorithms alike. It feasts on understanding like olfactory-gustatory awareness feasts on food.
Even that is not all of who I am, and even “I” am not all of the larger transpersonal experience that I live through and that lives through me. I think that people who are gifted with deep understanding of mathematics and systemizing logic tend to have been conditioned to use that part of the psyche to the exclusion of other modes of sense and sense-making, leaving the rich heritage of human understanding of larger psychic contexts to atrophy, or worse, reappear as a projected shadow appearance of ‘woo’ to the defensive ego, still wounded from the injury of centuries under our history of theocratic rule. This is of course very dangerous, and what is even more dangerous is that you need that atrophied part of the psyche to understand why it is dangerous…which is why seeing the hard problem in the first place is too hard for many people, even many philosophers who have been discussing it for decades.
Synchronistically, I now return to the video at 57:54, where Yulia touches on climate change (or more importantly, from our perspective, climate destabilization) and the flawed expectation of mind uploading. I agree with her that it won’t work, although probably for different reasons. It’s not because the substrate matters – it does, but only because the substrate itself is a lensing artifact masking what is actually the totality of conscious experience.
Organic matter and biology are a living history of conscious experience that cannot be transcended without losing the significance and grounding of that history. Just as our body cannot survive by drinking an image of water, higher consciousness cannot flourish in a sandbox of abstract semiotic switches. We flourish *in spite of* the limits of body and brain, not because our experience is being generated by them.
This is not to say that I think organic matter and biology are in any way the limits of consciousness or human consciousness, but rather they are a symptom of the recipe for the development of the rich human qualities of consciousness that we value most. The actual recipe of human consciousness is made of an immense history of conscious experience, wrapped around itself in obscenely complicated ways that might echo the way that protein structures are ordered. This recipe includes seemingly senseless repetition of particular conscious experiences over vast durations of time. I don’t think that this authenticity can be faked. Unlike the patina of an antique chair or the bouquet of a vintage wine that could in theory be replicated artificially, the humanness of human consciousness depends on the actual authenticity of the experience. It actually takes billions of years of just these types of physical > chemical > organic > cellular > somatic > cerebral > anthropological > cultural > historical experiences to build the capacity to appreciate the richness and significance of those layers. Putting a huge data set end product of that chain of experience in the hands of a purely pre-organic electrochemical processor and expecting it to animate into human-like awareness is like trying to train a hydrogen bomb to sing songs around a campfire.
The Self-Seduction of Geppetto

Here, the program finds a way to invert my intentions and turn Geppetto into a robot.
My instructions were “Evil robot Pinocchio making marionnette Geppetto dance as a puppet spectacular detail superrealistic”.
Instead, Pinocchio seems to be always rendered with strings (I didn’t ask for that), and only partially a robot. Pinocchio seems to have a non-robot head and a body that ranges from non-robotic to semi-robotic. It seems ambiguous whether it is Geppetto or Pinocchio who is the evil robot puppet. At the end it appears to be a hapless Geppetto who has been taken over by the robot completely (I didn’t ask for that) and (the hallucination of?) Pinocchio is gone.
I am reminded of the Maya Angelou re-quote
“When people show you who they are, believe them the first time.”
Intellectual Blind Spot and AI
The shocking blind spot that is common to so many highly intellectual thinkers, the failure of AI, and the lack of understanding about what consciousness is are different aspects of the same thing.
The intellectual function succeeds because it inverts the natural relation of what I would call sensory-motive phenomena. Natural phenomena, including physical aspects of nature, are always qualitative, participatory exchanges of experience. Because the intellect has a special purpose to freely hypothesize without being constrained by the rest of nature, intellectual experience lacks direct access to its own dependence on the rest of nature. Thinking feels like it occurs in a void. It feels like it is not feeling.
When we subscribe to a purely intellectual view of life and physics as information processing, we disqualify the aesthetic dimension of nature, which is ultimately the sole irreducible and irreplaceable resource from which all phenomena arise – not as generic recombinations of quantum-mechanical states but as an infinite font of novel aesthetic-participatory diffractions of the eternal totality of experience. This is what cannot be “simulated” or imitated…because it is originality itself.
Numbers and logic can only reflect the creativity of that resource, not generate it. No amount of binary math can replace the colors displayed on a video screen, or a conscious user that can see it. It need not be anything mystical or religious – it’s just parsimony. Information processing doesn’t need any awareness, it just needs isolated steps in a chain reaction on some physical substrate that can approximate the conditions of reliable but semi-mutable solidity. Gears, semiconductors, a pile of rocks…it doesn’t matter what the form is because there is no sense of form going on. All that is going on is low-level generic changes that have no capacity to add themselves up. There are no ‘emergent properties’ outside of consciousness. Math and physics can’t ‘seem like’ anything because seeming is not a logical/mathematical or physical function.
A Multisense Realist Critique of “Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism”
Let me begin by saying first that my criticism of the thoughts, ideas, and assumptions behind this hypothesis on the Hard Problem (and of all such hypotheses) does not in any way constitute a challenge to the expertise or intelligence of its authors. I have the utmost respect for anyone who takes the time to thoughtfully formulate an opinion on the matter of consciousness, and I do not in any way place myself on the same intellectual level as those who have spent their career achieving a level of skill and knowledge of mathematics and technology that is well beyond my grasp.
I do have a lifelong interest in the subject of consciousness, and most of a lifetime of experience with computer technology, however, that experience is much more limited in scope and depth than that of full-time, professional developers and engineers. Having said that, without inviting accusations of succumbing to the Dunning-Kruger effect, I dare to wonder if abundant expertise in computer science may impair our perception in this area as well, and I would desperately like to see studies performed to evaluate the cognitive bias of those scientists and philosophers who see the Hard Problem of Consciousness as a pseudo-issue that can be easily dismissed by reframing the question.
Let me begin now in good faith to mount an exhaustive and relentless attack on the assumptions and conclusions presented in the following: “Chapter 15: Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism” by Richard Loosemore (PDF link), from the book Theoretical Foundations of Artificial General Intelligence, Editors: Wang, Pei, Goertzel, Ben (Eds.) I hope that this attack is not so annoying, exhausting, or offensive that it prevents readers from engaging with it and from considering the negation/inversion of the fundamental premises that it relies upon.
From the very top of the first page…
“To solve the hard problem of consciousness we observe that any cognitive system of sufficient power must get into difficulty when it tries to analyze consciousness concepts, because the mechanism that does the analysis will “bottom out” in such a way as to make the system declare these concepts to be both real and ineffable.”
Objections:
1: The phenomenon of consciousness (as distinct from the concept of consciousness) is the only possible container for qualities such as “real” or “ineffable”. It is a mistake to expect the phenomenon itself to be subject to the categories and qualities which are produced only within consciousness.
2: Neither my analysis of the concept nor of the phenomenon of consciousness ‘bottoms out’ in the way described. I would say that consciousness is both real, more than real, less than real, effable, semi-effable, and trans-effable, but not necessarily ineffable. Consciousness is the aesthetic-participatory nesting of sensory-motive phenomena from which all other phenomena are derived and maintained, including anesthetic, non-participatory, non-sensory, and non-motivated appearances such as those of simple matter and machines.
“This implies that science must concede that there are some aspects of the world that deserve to be called “real”, but which are beyond explanation.”
Here my understanding is that attempting to explain (ex-plain) certain aspects of consciousness is redundant since they are already ‘plain’. Blue is presented directly as blue. It is a visible phenomenon which is plain to all those who can see it and unexplainable to all those who cannot. There is nothing to explain about the phenomenon itself, as any such effort would only make the assumption that blue can be decomposed into other phenomena which are not blue. There is an implicit bias or double standard in such assumptions: any of the other phenomena which we might try to use to account for the existence of blue would themselves require further explanation and decomposition as well. How do we know that we are even reading words that mean what they mean to another person? As long as a sense of coherence is present, even the most surreal dream experiences can be validated within the dream as perfectly rational and real.
Even the qualifier “real” is also meaningless outside of consciousness. There can be no physical or logical phenomenon which is unreal or can ‘seem’ other than it is without consciousness to provide the seeming. The entire expectation of seeming is an artifact of some limitation on a scope of perception, not of physical or logical fact.
“Finally, behind all of these questions there is the problem of whether we can explain any of the features of consciousness in an objective way, without stepping outside the domain of consensus-based scientific enquiry and becoming lost in a wilderness of subjective opinion.”
This seems to impose a doomed constraint onto any explanation in advance: since the distinction between subjective and objective can only exist within consciousness, consciousness cannot presume to transcend itself by limiting its scope to only those qualities which consciousness itself deems ‘objective’. There is no objective arbiter of objectivity, and presuming that such a standard is equivalent to, or available through, our scientific legacy of consensus is especially biased, considering the intentional reliance on instruments and methods in that scientific tradition which are designed to exclude all association with subjectivity.* To ask that an explanation of consciousness be limited to consensus science is akin to asking “Can we explain life without referring to anything beyond the fossil record?” In my understanding, science itself must expand radically to approach the phenomenon of consciousness, rather than consciousness having to be reduced to fit into our cumulative expectations about nature.
“One of the most troublesome aspects of the literature on the problem of consciousness is the widespread confusion about what exactly the word “consciousness” denotes.”
I see this as a sophist objection (ironically, I would also say that this all-too-common observation is one of the most troublesome aspects of materialistic arguments against the hard problem). Personally, I have no confusion whatsoever about what the common sense term ‘consciousness’ refers to, and neither does anyone else when it comes to the actual prospect of losing consciousness. When someone is said to have lost consciousness forever, what is lost? The totality of experience. Everything would be lost for the person whose consciousness is truly and completely lost forever. All that remains of that person would be the bodily appearances and memories in the conscious experiences of others (doctors, family members, cats, dust mites, etc). If all conscious experience were to terminate forever, what remained would be impossible to distinguish from nothing at all. Indeed there would be no remaining capacity to ‘distinguish’ either.
I will skip over the four bullet points from Chalmers’ work in 15.1.1 (The ability to introspect or report mental states…etc), as I see them as distractions arising from specific use cases of language and the complex specifics of human psychology rather than from the simple/essential nature of consciousness as a phenomenon.
Moving on to what I see as the meat of the discussion – qualia. In this next section, much is made about the problems of communicating with others about specific phenomenal properties. I see this as another distraction, and if we interrogate this definition of qualia as that which “we cannot describe to a creature that does not claim to experience them”, we will find that it is a condition which everything in the universe fits just as well.
We cannot describe numbers, or gravity, or matter to a creature that does not claim to experience them either. Ultimately the only difference between qualia and non-qualia is that non-qualia only exist hypothetically. Things which are presumed to exist independently of subjectivity, such as matter, energy, time, space, and information are themselves concepts derived from intersubjective consensus. Just as the Flatlander experiences a sphere only as a circle of changing size, our entire view of objective facts and their objectiveness is objectively limited to those modalities of sense and sense-making which we have access to. There is no universe which is real that we could not also experience as the content of a (subjective) dream and no way to escape the constraints that a dream imposes even on logic, realism, and sanity themselves. A complete theory of consciousness cannot merely address the narrow kind of sanity that we are familiar with as thinking adults conditioned by the cumulative influence of Western society, but must also address non-ordinary experiences, mystical states of consciousness, infancy, acquired savant syndrome, veridical NDEs and reincarnation accounts, and on and on.
“…a philosophical zombie would behave as if it did have its own phenomenology (indeed its behavior, ex hypothesi, would be absolutely identical to its normal twin) but it would not experience any of the subjective sensations that we experience when we use our minds.”
As much as I revere David Chalmers’ brilliant insights into the Hard Problem which he named, I see the notion of a philosophical zombie as flawed from the start. While we can imagine that two biological organisms are physically identical with and without subjective experience, there is no reason to insist that they must be.
I would pose it differently and ask, ‘Can a doll be created that would seem to behave in every respect like a human being, but still be only a doll?’ To that question I would respond that there is no logical reason to deny that possibility, however, we also cannot deny the possibility that there are some people who at some time might be able to feel an ‘uncanny’ sense about such a doll, even if they are not consciously able to notice that sense. The world is filled with examples of people who can pretend and act as if they are experiencing subjective states that they are not. Professional actors and sociopaths, for example, are famously able to simulate deep sentiment and emotion, summoning tears on command, etc. I would ask of the AI dev community, what if we wanted to build an AGI simulator which did not have any qualia? Suppose we wanted to study the effects of torture, could we not hope to engineer a device or program which would allow us to understand some of the effects without having to actually subject a conscious device to excruciating pain? If so, then we cannot presume qualia to emerge automatically from structure or function. We have to have a better understanding of why and how qualia exist in the first place. That is the hard problem of consciousness.
“Similarly, if we know that wires from a red color-detection module are active, this tells us the cognitive level fact that the machine is detecting red, but it does not tell us if the machine is experiencing a sensation of redness, in anything like the way that we experience redness.”
Here I suggest that in fact the machine is not detecting red at all, but it is detecting some physical condition that corresponds to some of our experiences of seeing red, i.e. the open-eye presence of red which correlates to 680 nm wavelength electromagnetic stimulation of retinal cells. Since many people can dream and imagine red in the absence of such ophthalmological stimulation**, we cannot equate that detection with red at all.
Further, I would not even allow myself to assume that what a retinal cell or any physical instrument does in response to illumination automatically constitutes ‘detection’. Making such leaps is, in my understanding, precisely how our thinking about the hard problem of consciousness slips into circular reasoning. To see any physical device as a sensor or sense organ is to presume a phenomenological affect on a micro scale, as well as a mechanical effect described by physical force/field mathematics. If we define forces and fields purely as mechanical facts with no sensory-motive entailment, then it follows logically that no complex arrangement of such force-field mechanisms would necessarily result in any addition or emergence of such an entailment. If shining a light on a molecule changes the shape or electrical state of that molecule, every subsequent chain of physical changes effected by that cause will occur with or without any experience of redness. Any behavior that a human body or any species of body can evolve to perform could just as easily have evolved to be performed without anything but unexperienced physical chain reactions of force and field.
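To put the point in concrete terms, here is a hedged toy version of a ‘red detector’ (invented for illustration, not taken from the chapter): it flags a numeric condition that correlates with what we see as red, and nothing in the process involves, or needs, any experienced redness.

```python
# Toy "red detector" (illustrative only). It compares numbers against a
# threshold; there is no redness anywhere in the process, only arithmetic
# that correlates with what a human sees as red.

def detects_red(wavelength_nm, center=680.0, tolerance=40.0):
    return abs(wavelength_nm - center) <= tolerance

print(detects_red(680.0))  # True  -- "red" by fiat of the threshold
print(detects_red(450.0))  # False -- "not red", just as colorlessly
```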
“The trouble is that when we try to say what we mean by the hard problem, we inevitably end up by saying that something is missing from other explanations. We do not say ‘Here is a thing to be explained,’ we say ‘We have the feeling that there is something that is not being addressed, in any psychological or physical account of what happens when humans (or machines) are sentient.’”
To me, this is a false assumption that arises from an overly linguistic approach to the issue. I do in fact say “Here is a thing to be explained”. In fact, I could use that very same word “here” as an example of that thing. What is the physical explanation for the referent of the term “here”? What gives a physical event the distinction of being ‘here’ versus ‘there’?
The presence of something like excruciating pain can’t be dismissed on account of a compulsion to assume that ‘Ouch!’ needs to be deconstructed into nociception terminology. I would turn this entire description of ‘the trouble’ around to ask the author why they feel that there is something about pain that is not communicated to anyone who experiences it directly, and how anything meaningful about that experience could be addressed by other, non-painful accounts.
On to the dialectic between skeptic and phenomenologist:
“The difficulty we have in supplying an objective definition should not be taken as grounds for dismissing the problem—rather, this lack of objective definition IS the problem!”
I’m not sure why a phenomenologist would say that. To me, the hard problem of consciousness has nothing at all to do with language. We have no problem communicating “Ouch!”. The only problem is in the expectation that all terms should translate into all languages. There is no problem with reducing a subjective quality of phenomenal experience into a word or gesture – the hard problem is why and how there should be any inflation of non-phenomenal properties to ‘experience’ in the first place. I don’t find it hard to articulate, though many people do seem to have a hard time accepting that it makes sense.
“In effect, there are certain concepts that, when analyzed, throw a monkey wrench into the analysis mechanism”
I would reconstruct that observation this way: “In effect, there are certain concepts that, when analyzed, point to facts beyond the analysis mechanism, and further beyond mechanism and analysis. These are the facts of qualia from which the experiences of analysis and mechanical appearance are derived.”
“All facets of consciousness have one thing in common: they involve some particular types of introspection, because we ‘look inside’ at our subjective experience of the world”
Not at all. Introspection is clearly dependent on consciousness, but so are all forms of experience. Introspection does not define consciousness, it is only a conscious experience of trying to make intellectual sense of one’s own conscious experience. Looking outside requires as much consciousness as looking inside and unconscious phenomena don’t ‘look’.
From that point in the chapter, there is a description of some perfectly plausible ideas about how to design a mechanism which would appear to us to simulate the behaviors of an intelligent thinker, but I see no connection between such a simulation and the hard problem of consciousness. The premise underestimates consciousness to begin with and then goes on to speculate on how to approximate that disqualified version of qualia production, consistently mistaking qualia for ‘concepts’ that cannot be described.
Pain is not a concept, it is a percept. Every function of the machine described could just as easily be presented as hexadecimal code, words, binary electronic states, etc. A machine could put together words that we recognize as having to do with pain, but that sense need not be available to the machine. In the mechanistic account of consciousness, sensory-motive properties are taken for granted and aesthetic-participatory elaborations of those properties that we would call human consciousness are misattributed to the elaborations of mechanical process. That “blue” cannot be communicated to someone who cannot see it does not define what blue is. Building a machine that cannot explain what is happening beyond its own mechanism doesn’t mean that qualia will automatically appear to stand in for that failure. Representation requires presentation, but presentation does not require representation. Qualia are presentations, including the presentation of representational qualities between presentations.
“Yes, but why would that short circuit in my psychological mechanism cause this particular feeling in my phenomenology?”
Yes, exactly, but that’s still not the hard problem. The hard problem is “Why would a short circuit in any mechanism cause any feeling or phenomenology in the first place? Why would feeling even be a possibility?”
“The analysis mechanism inside the mind of the philosopher who raises this objection will then come back with the verdict that the proposed explanation fails to describe the nature of conscious experience, just as other attempts to explain consciousness have failed. The proposed explanation, then, can only be internally consistent with itself if the philosopher finds the explanation wanting. There is something wickedly recursive about this situation.”
Yes, it is wickedly recursive in the same exact way that any blind faith/Emperor’s New Clothes persuasion is wickedly recursive. What is proposed here can be used to claim that any false theory about consciousness which predicts that it will be perceived as false is evidence of its (mystical, unexplained) essential truth. It is the technique of religious dogma in which doubt is defined as evidence of the unworthiness of the doubter to deserve to understand why it isn’t false.
“I am not aware of any objection to the explanation proposed in this chapter that does not rely for its force on that final step, when the philosophical objection deploys the analysis mechanism, and thereby concludes that the proposal does not work because the analysis mechanism in the head of the philosopher returned a null result.”
Let me try to make the reader aware of one such objection then. I do not use an analysis mechanism, I use the opposite – an anti-mechanism of direct participation that seeks to discover greater qualities of sense and coherence for their own aesthetic saturation. That faculty of my consciousness does not return a null result; it has instead returned a rich cosmogony detailing the relationships between a totalistic spectrum of aesthetic-participatory nestings of sensory-motive phenomena and its dialectic, diffracted altars: matter (concrete anesthetic appearances) and information (abstract anesthetic appearances).
“I am now going to make a case that all of the various subjective phenomena associated with consciousness should be considered just as “real” as any other phenomena in the universe, but that science and philosophy must concede that consciousness has the special status of being unanalyzable.”
I’m glad that qualia are at least given a ‘real’ status! I don’t see that it’s unanalyzable though. I analyze qualia all the time. I think the limitation is that the analysis doesn’t translate into math or geometry…which is exactly what I would expect because I understand the role of math and geometry to be precisely the qualia which are presented to represent the disqualification of alienated/out of bounds qualia. We don’t experience on a geological timescale, so our access to experiences on that scale is reduced to a primitive vocabulary of approximations. I suggest that when two conscious experiences of vastly disparate timescales engage with each other, there is a mutual rendering of each other as either inanimate or intangible…as matter/object or information/concept.
In the latter parts of this chapter, the focus is on working with the established hypothesis of qualia as bottomed-out mechanical analysis. The irony of this is that I can see clearly that it is math and physics, mechanism and analysis which are the qualia of bottomed out direct perception. The computationalist and physicalist both have got the big picture turned inside out, where the limitations of language and formalism are hallucinated into sources of infinite aesthetic creativity. Sight is imagined to emerge naturally from imperfect blindness. It’s an inversion of map and territory on the grandest possible scale.
“When we say that a concept is more real the more concrete and tangible it is, what we actually mean is that it gets more real the closer it gets to the most basic of all concepts. In a sense there is a hierarchy of realness among our concepts, with those concepts that are phenomenologically rich being the most immediate and real, and with a decrease in that richness and immediacy as we go toward more abstract concepts.”
To the contrary, when we say that a concept is more real the more concrete and tangible it is, what we actually mean is that it gets more real the further it gets from the most abstract of all qualia: concepts. No concepts are as phenomenologically rich, immediate, and real as literally everything that is not a concept.
“This seems to me a unique and unusual compromise between materialist and dualist conceptions of mind. Minds are a consequence of a certain kind of computation; but they also contain some mysteries that can never be explained in a conventional way.”
Here too, I see that the opposite clearly makes more sense. Computation is a consequence of certain kinds of reductive approximations within a specific band of consciousness. To compute or calculate is actually the special and (to us) mysterious back door to the universal dream which enables dreamers to control and objectify aspects of their shared experience.
I do love all of the experiments proposed toward the end, although it seems to me that all of the positive results could be simulated by a device that is designed to simulate the same behaviors without any qualia. Of all of the experiments, I think that the mind-meld is most promising as it could possibly expose our own consciousness to phenomena beyond our models and expectations. We may be able, for example, to connect our brain to the brain of a fish and really be able to tell that we are feeling what the fish is feeling. Because my view of consciousness is that it is absolutely foundational, all conscious experience overlaps at that fundamental level in an ontological way rather than merely as a locally constructed model. In other words, while some aspects of empathy may consist only of modeling the emotions of another person (as a sociopath might do), I think that there is a possibility for genuine empathy to include a factual sharing of experience, even beyond assumed boundaries of space, time, matter, and energy.
Thank you for taking the time to read this. I would not bother writing it if I didn’t think that it was important. The hard problem of consciousness may seem to some as an irrelevant, navel-gazing debate, but if I am on the right track in my hypothesis, it is critically important that we get this right before attempting to modify ourselves and our civilization based on a false assumption of qualia as information.
Respectfully and irreverently yours,
Craig Weinberg
*This point is addressed later on in the chapter: “it seems almost incoherent to propose a scientific (i.e. non-subjective) explanation for consciousness (which exists only in virtue of its pure subjectivity).”
**Not to mention the reports from people blind from birth of seeing colors during Near Death Experiences.
De-Simulating Natural Intelligence
Hi friends! I’m getting ready for my poster presentation at the Science of Consciousness conference in Interlaken:
Abstract: In recent years, scientific and popular imagination has been captured by the idea that what we experience directly is a neuro-computational simulation. At the same time, there is a contradictory idea that some things that we experience, such as the existence of brains and computers, are real enough to allow us to create fully conscious and intelligent devices. This presentation will try to explain where this logic breaks down, why true intelligence may never be generated artificially, and why that is good news. Recent studies have suggested that human perception is not as limited as previously thought and that while machines can do many things better than we can, becoming conscious may not be one of them. The approach taken here can be described as a Variable Aspect Monism or Multisense Realism, and it seeks to clarify the relationship between physical form, logical function, and aesthetic participation.
In Natural Intelligence, intelligence is abstracted from within a full spectrum of aesthetically rich experience that developed over billions of years of evolving sensation and participation.
In Artificial “Intelligence”, intelligence is abstracted from outside the natural, presumably narrow range of barely aesthetic experience that has remained relatively unchanged over human timescales (but has changed over geological timescales, evolving, presumably, very different aesthetics).
What Multisense Realism proposes is more pansensitivity than panpsychism.
The standard notion of panpsychism is what I would call ‘promiscuous panpsychism’, meaning that every atom has to be ‘conscious’ in a kind of thinking, understanding way. I think that this promiscuity is what makes panpsychism unappealing to many/most people.
Under pansensitivity, intelligence *diverges* from a totalistic absolute, diffracting through calibrated degrees of added insensitivity. It’s like in school when kids draw a colorful picture and then cover it with black crayon (the pre-big bang) and then begin to scratch it off to reveal the colors underneath. The black crayon is entropy, the scratching is negentropy, and the size of the revealed image is the degree of aesthetic saturation.
So yes, the physical substances that we use to build machines are forms of conscious experience, but they are very low-level, low-aesthetic forms which don’t necessarily scale up on their own (since they have not evolved over billions of years of natural experience by themselves).
I think that despite our success in putting our own high-level aesthetic experience into code that we use to manipulate hardware, it is still only reflecting our own natural ‘psychism’ back to us, rather than truly exporting it into the machine hardware.
Can Effort Be Simulated?
This may seem like an odd question, but I think that it is a great one if you’re thinking about AI and the hard problem of consciousness.
Let’s say I want my dishwasher to feel the sense of effort that I feel when I wash dishes. How would I do it? It could make groaning noises or seem to procrastinate by refusing to turn on for days on end, but this would be completely pointless from a practical perspective and it would only seem like effort in my imagination. In reality, any machine can be made to perform any function that it is able to do for as long as the physical parts hold up without any effort on anyone’s part. That’s why they are machines. That’s why we replace human labor with robot labor…because it’s not really labor at all.
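To make the thought experiment concrete, here is a hedged sketch of a dishwasher scripted to ‘show effort’. Every groan and pause is a scheduled output; nothing in the program is harder for the machine than anything else, which is exactly the point.

```python
import time

# A dishwasher scripted to perform "effort" (toy sketch). The groans and
# pauses are scheduled outputs; no step costs the machine anything, and
# no step is more difficult for it than any other.

def wash_with_fake_effort(dishes):
    for i, dish in enumerate(dishes, 1):
        print(f"washing {dish}...")
        time.sleep(0.1)                      # "procrastination", by timer
        if i % 2 == 0:
            print("ugh... so many dishes")   # "groaning", by schedule

wash_with_fake_effort(["plate", "cup", "pan", "bowl"])
```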
It is very popular to think of human beings as a kind of machine and the brain as a kind of computer, but imagine if that were really true. You could wash dishes for your entire lifetime and do nothing else. If someone wanted a house, you could simply build it for them. Machines are useful precisely because they don’t have to try to do anything. They have no sense of effort. They don’t care what they do or don’t do.
You might say, “There’s nothing special about that. Biological organisms just evolved to have this sense of effort to model physiological limits.” Ok, but what possible value would that have to survival? Under what circumstances would it serve an organism to work less than the maximum that it could physiologically? Any consideration such as conserving energy for the winter would naturally be rolled into the maximum allowed by the regulatory systems of the body.
So, I say no. Effort cannot be simulated. Effort is not equal to energy or time. It is a feeling which is so powerful that it dictates everything that we are able to do and unable to do. Effort is a telltale sign of consciousness. If we could sleep while we do the dishes, we would, because we would not have to feel the discomfort of expending effort to do it.
Any computer, AI, or robot that would be useful to us could not possibly have a sense of its own efforts as being difficult. Once we understand how a sense of effort is truly antithetical to machine behaviors, perhaps we can then begin to see why consciousness in general cannot be simulated. How would an AI that has no sense of not wanting to do the dishes ever be able to truly understand which activities are pleasurable and which are painful?
Joscha Bach: We need to understand the nature of AI to understand who we are – Part 2
This is the second part of my comments on Nikola Danaylov’s interview of Joscha Bach: https://www.singularityweblog.com/joscha-bach/
My commentary on the first hour is here. Please watch or listen to the podcast as there is a lot that is omitted and paraphrased in this post. It’s a very fast paced, high-density conversation, and I would recommend listening to the interview in chunks and following along here for my comments if you’re interested.
1:00:00 – 1:10:00
JB – Conscious attention in a sense is the ability to make indexed memories that I can later recall. I also store the expected result and the triggering condition. When do I expect the result to be visible? Later I have feedback about whether the decision was good or not. I compare the result I expected with the result that I got, and I can undo the decision that I made back then. I can change the model or reinforce it. I think that this is the primary mode of learning that we use, beyond just associative learning.
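CW – As I understand the learning loop Joscha describes, it looks something like the following hedged sketch (my paraphrase in code, not his): index a memory of the trigger, the decision, and the expected result; when feedback arrives, compare expectation to outcome and reinforce or weaken confidence in the model.

```python
# Hedged paraphrase (mine, not Bach's) of the loop described above: store
# indexed memories of (trigger, decision, expected result); on feedback,
# compare expectation to outcome and reinforce or weaken the model.

memories = []

def decide(trigger, decision, expected, confidence=0.5):
    memories.append({"trigger": trigger, "decision": decision,
                     "expected": expected, "confidence": confidence})

def feedback(index, actual, lr=0.2):
    m = memories[index]
    if actual == m["expected"]:
        m["confidence"] = min(1.0, m["confidence"] + lr)  # reinforce model
    else:
        m["confidence"] = max(0.0, m["confidence"] - lr)  # revise / undo

decide("dark cloud", "take umbrella", expected="rain")
feedback(0, actual="rain")
print(memories[0]["confidence"])  # 0.7 -- the decision model is reinforced
```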
JB – 1:01:00 Consciousness means that you will remember what you had attended to. You have this protocol of ‘attention’. The memory of the binding state itself, the memory of being in that binding state where you have this observation that combines as many perceptual features as possible into a single function. The memory of that is phenomenal experience. The act of recalling this from the protocol is Access Consciousness. You need to train the attentional system so it knows where you store your backend cognitive architecture. This is recursive access to the attentional protocol, you remember when you make the recall. You don’t do this all the time, only when you want to train this. This is reflexive consciousness. It’s the memory of the access.
CW – By that definition, I would ask if consciousness couldn’t exist just as well without any phenomenal qualities at all. It is easy to justify consciousness as a function after the fact, but I think that this seduces us into thinking that something impossible can become possible just because it could provide some functionality. To say that phenomenal experience is a memory of a function that combines perceptual features is to presume that there would be some way for a computer program to access its RAM as perceptual features rather than as the (invisible, unperceived) states of the RAM hardware itself.
JB – Then there is another thing, the self. The self is a model of what it would be like to be a person. The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them. They cannot even approximate arbitrary functions, except by evolution, which takes a very long time. What do you do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear. A simulation is basically isomorphic to the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating. We are characters in that movie. The movie is a complete simulation, a VR that is running in the neocortex.
You and me are characters in this VR. In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe. We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification. That doesn’t seem to be a useful condition. The brain is normally set up so that the self thinks that it’s real, and gets access to the language center, and we can talk to each other, and here we are. The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious. Some people think that a simulation cannot be conscious, only a physical system can, but they’ve got it completely backwards. A physical system cannot be conscious, only a simulation can be conscious. Consciousness is a simulated property of a simulated self.
CW – To say “The self is a model of what it would be like to be a person” seems to be circular reasoning. The self is already what it is like to be a person. If it were a model, then it would be a model of what it’s like to be a computer program with recursively binding states. Then the question becomes, why would such a model have any “what it’s like to be” properties at all? Until we can explain exactly how and why a phenomenal property is an improvement over the absence of a phenomenal property for a machine, there’s a big problem with assuming the role of consciousness or self as ‘model’ for unconscious mechanisms and conditions. Biological machines don’t need to model, they just need to behave in the ways that tend toward survival and reproduction.
(JB) “The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them”.
CW – I agree with this, to the extent that I agree that if there were any such thing as *purely* physical structures, they would not feel anything, and they would just be tangible geometric objects in public space. I think that rather than physical activity somehow leading to emergent non-physical ‘feelings’, it makes more sense to me that physics is made of “feelings” which are so distant and different from our own that they are rendered tangible geometric objects. It could be that physical structures appear in these limited modes of touch perception rather than in their own native spectrum of experience because they are much slower/faster and older than our own.
To say that neurons or brains feel would be, in my view, a category error since feeling is not something that a shape can logically do, just by Occam’s Razor, and if we are being literal, neurons and brains are nothing but three-dimensional shapes. The only powers that a shape could logically have are geometric powers. We know from analyzing our dreams that a feeling can be symbolized as a seemingly solid object or a place, but a purely geometric cell or organ would have no way to access symbols unless consciousness and symbols are assumed in the first place.
If a brain has the power to symbolize things, then we shouldn’t call it physical. The brain does a lot of physical things but if we can’t look into the tissue of the brain and see some physical site of translation from organic chemistry into something else, then we should not assume that such a transduction is physical. The same goes for computation. If we don’t find a logical function that changes algorithms into phenomenal presentations then we should not assume that such a transduction is computational.
(JB) “What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear.”
CW – Here also the reasoning seems circular. Useful to know what? “What it is like” doesn’t have to mean anything to a machine or program. To me this is like saying that a self-driving car would find it useful to create a dashboard and pretend that it is driven by a person using that dashboard rather than being driven directly by the algorithms that would be used to produce the dashboard.
(JB) “A simulation basically is isomorphic in the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating.”
CW – I have thought of it that way, but now I think that it makes more sense if we see both the brain and the person as parts of a movie that is branching off from a larger movie. I propose that timescale differentiation is the primary mechanism of this branching, although timescale differentiation is only one sort of perceptual lensing that allows experiences to include and exclude each other.
I think that we might be experiential fragments of an eternal experience, and a brain is a kind of icon that represents part of the story of that fragmentation. The brain is a process made of other processes, which are all experiences that have been perceptually lensed by the senses of touch and sight to appear as tangible and visible shapes.
The brain has no mechanical reason to make movies, it just has to control the behavior of a body in such a way that repeats behaviors which have happened to coincide with bodies surviving and reproducing. I can think of some good reasons why a universe which is an eternal experience would want to dream up bodies and brains, but once I plug up all of the philosophical leaks of circular reasoning and begging the question, I can think of no plausible reason why an unconscious body or brain would or could dream.
All of the reasons that I have ever heard arise as post hoc justifications that betray an unscientific bias toward mechanism. In a way, the idea of mechanism as omnipotent is even more bizarre than the idea of an omnipotent deity, since the whole point of a mechanistic view of nature is to replace undefined omnipotence with robustly defined, rationally explained parts and powers. If we are just going to say that emergent phenomenal magic happens once the number of shapes or data relations is so large that we don’t want to deny any power to it, we are really just reinventing religious faith in an inverted form. It is to say that sufficiently complex computations transcend computation for reasons that transcend computation.
(JB) “The movie is a complete simulation, a VR that is running in the neocortex.”
CW – We have the experience of playing computer games using a video screen, so we conflate a computer program with a video screen’s ability to render visible shapes. In fact, it is our perceptual relationship with a video screen that is doing the most critical part of the simulating. The computer by itself, without any device that can produce visible color and contrast, would not fool anyone. There’s no parsimonious or plausible way to justify giving the physical states of a computing machine aesthetic qualities unless we are expecting aesthetic qualities from the start. In that case, there is no honest way to call them mere computers.
(JB) “In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe.”
CW – Computer programs don’t need desires or rewards though. Programs are simply executed by physical force. Algorithms don’t need to serve a purpose, nor do they need to be enticed to serve a purpose. There’s no plausible, parsimonious reason for the brain to write its predictive algorithms or meta-algorithms as anything like a ‘feeling’ or sensation. All that is needed for a brain is to store some algorithmically compressed copy of its own brain state history. It wouldn’t need to “feel” or feel “what it’s like”, or feel what it’s like to “be in a universe”. These are all concepts that we’re smuggling in, post hoc, from our personal experience of feeling what it’s like to be in a universe.
(JB) “We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification.”
CW – It’s easy to say that it’s not very useful knowledge if it doesn’t fit our theory, but we need to test for that bias scientifically. It might just be that people depersonalize or have negative results to the idea that they don’t really exist because it is false, and false in a way that is profoundly important. We may be as real as anything ever could be, and there may be no ‘simulation’ except via the power of imagination to make believe.
(JB) “The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious.”
CW – I don’t see a logical need for that. Attention need not logically facilitate any phenomenal properties. Attention can just as easily be purely behavioral, as can ‘memory’, or ‘models’. A mechanism can be triggered by groups of mechanisms acting simultaneously without any kind of semantic link defining one mechanism as a model for something else. Think of it this way: What if we wanted to build an AI without ANY phenomenal experience? We could build a social chameleon machine, a sociopath with no model of self at all, but instead a set of reflex behaviors that mimic those of others which are deemed to be useful for a given social transaction.
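As a hypothetical sketch (all names mine), such a chameleon needs nothing more than a lookup table of observed behaviors and their payoffs:

```python
# Hypothetical sketch of a "social chameleon" with no self-model at all:
# it stores observed cue->behavior pairs and replays whichever seemed useful.

class ChameleonAgent:
    def __init__(self):
        self.reflexes = {}  # observed social cue -> mimicked behavior

    def observe(self, cue, behavior, payoff):
        # Keep only behaviors that appeared useful in past transactions.
        if payoff > 0:
            self.reflexes[cue] = behavior

    def respond(self, cue):
        # Pure reflex: no model of self or other, just a lookup.
        return self.reflexes.get(cue, "do nothing")

agent = ChameleonAgent()
agent.observe("greeting", "smile and greet back", payoff=1)
print(agent.respond("greeting"))  # -> smile and greet back
```

Nothing in this design models a self, and yet its behavior could pass for social attention and memory.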
(JB) “A physical system cannot be conscious, only a simulation can be conscious.”
CW – I agree this is an improvement over the idea that physical systems are conscious. What would it mean for a ‘simulation’ to exist in the absence of consciousness though? A simulation implies some conscious audience which participates in believing or suspending disbelief in the reality of what is being presented. How would it be possible for a program to simulate part of itself as something other than another (invisible, unconscious) program?
(JB) “Consciousness is a simulated property of a simulated self.”
CW – I turn that around 180 degrees. Consciousness is the sole absolutely authentic property. It is the base level sanity and sense that is required for all sense-making to function on top of. The self is the ‘skin in the game’ – the amplification of consciousness via the almost-absolutely realistic presentation of mortality.
KD – So in a way, Daniel Dennett is correct?
JB – Yes,[…] but the problem is that the things that he says are not wrong, but they are also not non-obvious. It’s valuable because there are no good or bad ideas. It’s a good idea if you comprehend it and it elevates your current understanding. In a way, ideas come in tiers. The value of an idea for the audience is if it’s a half tier above the audience. You and me have an illusion that we find objectively good ideas, because we work at the edge of our own understanding, but we cannot really appreciate ideas that are a couple of tiers above our own ideas. One tier above is new to the audience; two tiers above means that we don’t understand the relevance of these ideas because we don’t have the ideas that we need to appreciate the new ideas. An idea appears to be great to us when we can stand right in its foothills and look at it. It doesn’t look great anymore when we stand on the peak of another idea and look down and realize the previous idea was just the foothills to that idea.
KD – Discusses the problems with the commercialization of academia and the negative effects it has on philosophy.
JB – Most of us never learn what it really means to understand, largely because our teachers don’t. There are two types of learning. One is you generalize over past examples, and we call that stereotyping if we’re in a bad mood. The other tells us how to generalize, and this is indoctrination. The problem with indoctrination is that it might break the chain of trust. If someone doesn’t check the epistemology of the people that came before them, and takes their word as authority, that’s a big difficulty.
CW – I like the ideas of tiers because it confirms my suspicion that my ideas are two or three tiers above everyone else’s. That’s why y’all don’t get my stuff…I’m too far ahead of where you’re coming from. 🙂
1:07:00 Discussion about Ray Kurzweil, the difficulty in predicting timeline for AI, confidence, evidence, outdated claims and beliefs etc.
1:19 JB – The first stage of AI: finding things that require intelligence to do, like playing chess, and then implementing them as algorithms. Manually engineering strategies for being intelligent in different domains. This didn’t scale up to General Intelligence.
We’re now in the second phase of AI, building algorithms to discover algorithms. We build learning systems that approximate functions. He thinks deep learning should be called compositional function approximation. Using networks of many functions instead of tuning single regressions.
There could be a third phase of AI where we build meta-learning algorithms. Maybe our brains are meta-learning machines, not just learning stuff but learning ways of discovering how to learn stuff (for a new domain). At some point there will be no more phases and science will effectively end because there will be a general theory for global optimization with finite resources and all science will use that algorithm.
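CW – For readers unfamiliar with the second phase, here is a toy illustration of what ‘compositional function approximation’ can mean: a composition of simple parameterized functions tuned from data, rather than a single hand-built regression. This is my own minimal example, not Joscha’s formulation.

```python
# Minimal illustration of "compositional function approximation":
# two composed tanh layers are tuned to approximate y = sin(x).
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    return np.tanh(x @ W + b)   # one simple function in the composition

W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

lr = 0.05
for _ in range(2000):
    h = layer(x, W1, b1)
    err = (h @ W2 + b2) - y
    # Gradients flow back through the composition (chain rule).
    gW2, gb2 = h.T @ err / len(x), err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1, gb1 = x.T @ dh / len(x), dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("mean squared error:", float((err**2).mean()))
```

The ‘learning’ here is nothing but the gradual adjustment of numbers so that the composed function matches the data.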
CW – I think that the more experience we gain with AI, the more we will see that it is limited in ways that we have not anticipated, and also that it is powerful in ways that we have not anticipated. I think that we will learn that intelligence as we know it cannot be simulated, however, in trying to simulate it, we will have developed something powerful, new, and interesting in its impersonal orthogonality to personal consciousness. The revolution may not be about the rise of computers becoming like people but of a rise in appreciation for the quality and richness of personal conscious experience in contrast to the impersonal services and simulations that AI delivers.
1:23 KD – Where does ethics fit, or does it?
JB – Ethics is often misunderstood. It’s not about being good or emulating a good person. Ethics emerges when you conceptualize the world as different agents, and yourself as one of them, and you share purposes with the other agents but you have conflicts of interest. If you think that you don’t share purposes with the other agents, if you’re just a lone wolf, and the others are your prey, there’s no reason for ethics – you only look for the consequences of your actions for yourself with respect to your own reward functions. It’s not ethics though – not a shared system of negotiation – because only you matter, because you don’t share a purpose with the others.
KD – It’s not shared but it’s your personal ethical framework, isn’t it?
JB – It has to be personal. I decided not to eat meat because I felt that I shared a purpose with animals: the avoidance of suffering. I also realized that it is not mutual. Cows don’t care about my suffering. They don’t think about it a lot. I had to think about the suffering of cows, so I decided to stop eating meat. That was an ethical decision. It’s a decision about how to resolve conflicts of interest under conditions of shared purpose. I think this is what ethics is about. It’s a rational process in which you negotiate with yourself and with others the resolution of conflicts of interest under contexts of shared purpose. I can make decisions about what purposes we share. Some of them are sustainable and others are not – they lead to different outcomes. In a sense, ethics requires that you conceptualize yourself as something above the organism; that you identify with the systems of meanings above yourself so that you can share a purpose. Love is the discovery of shared purpose. There needs to be somebody you can love that you can be ethical with. At some level you need to love them. You need to share a purpose with them. Then you negotiate, you don’t want them all to fail in all regards, and yourself. This is what ethics is about. It’s computational too. Machines can be ethical if they share a purpose with us.
KD – Other considerations: Perhaps ethics can be a framework within which two entities that do not share interests can negotiate in and peacefully coexist, while still not sharing interests.
JB – Not interests but purposes. If you don’t share purposes then you are defecting against your own interests when you don’t act on your own interest. It doesn’t have integrity. You don’t share a purpose with your food, other than that you want it to be nice and edible. You don’t fall in love with your food, it doesn’t end well.
CW – I see this as a kind of game-theoretic view of ethics…which I think is itself (unintentionally) unethical. I think it is true as far as it goes, but it makes assumptions about reality that are ultimately inaccurate, as they begin by defining reality in the terms of a game. I think this automatically elevates the intellectual function and its objectivizing/controlling agendas at the expense of the aesthetic/empathetic priorities. What if reality is not a game? What if the goal is not to win by being a winner but to improve the quality of experience for everyone and to discover and create new ways of doing that?
Going back to JB’s initial comment that ethics are not about being good or emulating a good person, I’m not sure about that. I suspect that many people, especially children, will be ethically shaped by encounters with someone, perhaps in the family or a character in a movie, who appeals to them and who inspires imitation. Whether their appeal is as a saint or a sinner, something about their style, the way they communicate or demonstrate courage, may align the personal consciousness with transpersonal ‘systems of meanings above’ themselves. It could also be a negative example which someone encounters – someone that you hate who inspires you to embody the diametrically opposite aesthetics and ideals.
I don’t think that machines can be ethical or unethical, not because I think humans are special or better than machines, but out of simple parsimony. Machines don’t need ethics. They perform tasks, not for their own purposes, or for any purpose, but because we have used natural forces and properties to perform actions that satisfy our purposes. Try as we might (and I’m not even sure why we would want to try), I do not think that we will succeed in changing matter or computation into something which both can be controlled by us and which can generate its own purposes. I could be wrong, but I think this is a better reason to be skeptical of AI than any reason that computation gives us to be skeptical of consciousness. It also seems to me that the aesthetic power of a special person who exemplifies a particular set of ethics can be taken to be a symptom of a larger, absolute aesthetic power in divinity or in something like absolute truth. This doesn’t seem to fit the model of ethics as a game-theoretic strategy.
JB – Discussion about eating meat, offers example pro-argument that it could be said that a pasture raised cow could have a net positive life experience since they would not exist but for being raised as food. Their lives are good for them except for the last day, which is horrible, but usually horrible for everyone. Should we change ourselves or change cattle to make the situation more bearable? We don’t want to look at it because it is un-aesthetic. Ethics in a way is difficult.
KD – That’s the key point of ethics. It requires sometimes we make choices that are not in our own best interests perhaps.
JB – It depends on how we define our self. We could say that the self is identical to the well-being of the organism, but this is a very short-sighted perspective. I don’t actually identify all the way with my organism. There are other things – I identify with society, my kids, my relationships, my friends, their well-being. I am all the things that I identify with and want to regulate in a particular way. My children are objectively more important than me. If I have to make a choice whether my kids survive or myself, my kids should survive. This is as it should be if nature has wired me up correctly. You can change the wiring, but this is also the weird thing about ethics. Ethics becomes very tricky to discuss once the reward function becomes mutable. When you are able to change what is important to you, what you care about, how do you define ethics?
CW – And yet, the reward function is mutable in many ways. Our experience in growing up seems to be marked by a changing appreciation for different kinds of things, even in deriving reward from controlling one’s own appetite for reward. The only constant that I see is in phenomenal experience itself. No matter how hedonistic or ascetic, how eternalist or existential, reward is defined by an expectation for a desired experience. If there is no experience that is promised, then there is no function for the concept of reward. Even in acts of self-sacrifice, we imagine that our action is justified by some improved experience for those who will survive after us.
KD – I think you can call it a code of conduct or a set of principles and rules that guide my behavior to accomplish certain kinds of outcomes.
JB – There are no beliefs without priors. What are the priors that you base your code of conduct on?
KD – The priors or axioms are things like diminishing suffering or taking an outside/universal view. When it comes to (me not eating meat), I take a view that is hopefully outside of me and the cows. I’m able to look at the suffering of eating a cow and their suffering of being eaten. If my prior is ‘minimize suffering’, because my test criterion for a sentient being is ‘can it suffer?’, then minimizing suffering must be my guiding principle in how I relate to another entity. Basically, everything builds up from there.
JB – The most important part of becoming an adult is taking charge of your own emotions – realizing that your emotions are generated by your own brain/organism, and that they are here to serve you. You’re not here to serve your emotions. They are here to help you do the things that you consider to be the right things. That means that you need to be able to control them, to have integrity. If you are just a victim of your emotions and do not do the things that you know are the right things, you don’t have integrity. What is suffering? Pain is the result of some part of your brain sending a teaching signal to another part of your brain to improve its performance. If the regulation is not correct, because you cannot actually regulate that particular thing, the pain signal will usually endure and increase until your brain figures it out and turns off the brain signaling center, because it’s not helping. In a sense, suffering is a lack of integrity. The difficulty is only that many beings cannot get to the degree of integrity where they can control the application of learning signals in their brain…control the way that their reward function is computed and distributed.
CW – My criticism is the same as in the other examples. There’s no logical need for a program or machine to invent ‘pain’ or any other signal to train or teach. If there is a program to run an animal’s body, the program need only execute those functions which meet the criteria of the program. There’s no way for a machine to be punished or rewarded because there’s no reason for it to care about what it is doing. If anything, caring would impede optimal function. If a brain doesn’t need to feel to learn, then why would a brain’s simulation need to feel to learn?
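To underline the point, Joscha’s teaching-signal account can itself be written as a plain control loop (a hypothetical sketch, names mine) in which nothing ever hurts:

```python
# Hypothetical sketch of pain as a "teaching signal" per the account above:
# the signal escalates while regulation fails, and the signaling center is
# switched off once it proves unhelpful. At no point does anything hurt.

def pain_loop(error, can_regulate, max_steps=10):
    signal = 1.0
    for step in range(max_steps):
        if error == 0:
            return f"resolved after {step} steps"
        if can_regulate:
            error -= 1      # the teaching signal improves performance
        else:
            signal *= 2     # unregulable: the signal endures and increases
    return "signaling center switched off (signal was not helping)"

print(pain_loop(error=3, can_regulate=True))   # resolved after 3 steps
print(pain_loop(error=3, can_regulate=False))  # switched off
```

The loop escalates and shuts off exactly as described, and that is the problem: the description is complete without any feeling in it.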
KD – According to your view, suffering is a simulation or part of a simulation.
JB – Everything that we experience is a simulation. We are a simulation. To us it feels real. There is no getting around this. I have learned in my life that all of my suffering is a result of not being awake. Once I wake up, I realize what’s going on. I realize that I am a mind. The relevance of the signals that I perceive is completely up to the mind. The universe does not give me objectively good or bad things. The universe gives me a bunch of electrical impulses that manifest in my thalamus, and my brain makes sense of them by creating a simulated world. The valence in that simulated world is completely internal – it’s completely part of that world, it’s not objective…and I can control this.
KD – So you are saying suffering is subjective?
JB – Suffering is real to the self with respect to ethics, but it is not immutable. You can change the definition of your self, the things that you identify with. We don’t have to suffer about things, political situations for example, if we recognize them to be mechanical processes that happen regardless of how we feel about them.
CW – The problem with the idea of simulation is that we are picking and choosing which features of our experience are more isomorphic to what we assume is an unsimulated reality. Such an assumption is invariably a product of our biases. If we say that the world we experience is a simulation running on a brain, why not also say that the brain is also a simulation running on something else? Why not say that our experiences of success with manipulating our own experience of suffering are as much a simulation as the original suffering was? At some point, something has to genuinely sense something. We should not assume that just because our perception can be manipulated we have used manipulation to escape from perception. We may perceive that we have escaped one level of perception, or objectified it, but this too must be presumed to be part of the simulation as well. Perception can only seem to have been escaped in another perception. The primacy of experience is always conserved.
I think that it is the intellect that is over-valuing the significance of ‘real’ because of its role in protecting the ego and the physical body from harm, but outside of this evolutionary warping, there is no reason to suspect that the universe distinguishes in an absolute sense between ‘real’ and ‘unreal’. There are presentations – sights, sounds, thoughts, feelings, objects, concepts, etc. – but the realism of those presentations can only be made of the same types of perceptions. We see this in dreams, with false awakenings, etc. Our dream has no problem with spontaneously confabulating experiences of waking up into ‘reality’. This is not to discount the authenticity of waking up in ‘actual reality’, only to say that if we can tell that it is authentic, then it necessarily means that our experience is not detached from reality completely and is not meaningfully described as a simulation. There are some recent studies that suggest that our perception may be much closer to ‘reality’ than we thought, i.e. that we can train ourselves to perceive quantum level changes.
If that holds up, we need to re-think the idea that it would make sense for a bio-computer to model or simulate a phenomenal reality that is so isomorphic and redundant to the unperceived reality. There’s not much point in a 1 to 1 scale model. Why not just put the visible photons inside the visual cortex in exactly the field that we see? I think that something else is going on. There may not be a simulation, only a perceptual lensing between many different concurrent layers of experience – not a dualism or dual-aspect monism, but a variable aspect monism. We happen to be a very, very complex experience which includes the capacity to perceive aspects of its own perception in an indirect or involuted rendering.
KD – Stoic philosophy says that we suffer not from events or things that happen in our lives, but from the stories that we attach to them. If you change the story, you can change the way you feel about them and reduce suffering. Let go of things we can’t really control, body, health, etc. The only thing you can completely control is your thoughts. That’s where your freedom and power come to be. In that mind, in that simulation, you’re the God.
JB – This ability to make your thoughts more truthful, this is Western enlightenment – in a way it is Aufklärung in German. There is also this other sense of enlightenment, Erleuchtung, that you have in a spiritual context. So Aufklärung fixes your rationality and Erleuchtung fixes your motivation. It fixes what’s relevant to you and your relationship between self and the universe. Often they are seen as mutually exclusive, in the sense that Aufklärung leads to nihilism, because you don’t give up your need for meaning, you just prove that it cannot be satisfied. God does not exist in any way that can set you free. In this other sense, you give up your understanding of how the world actually works so that you can be happy. You go down to a state where all people share the same cosmic consciousness, which is complete bullshit, right? But it’s something that removes the illusion of separation and the suffering that comes with the separation. It’s unsustainable.
CW – This duality of Aufklärung and Erleuchtung I see as another expression of the polarity of the universal continuum of consciousness. Consciousness vs machine, East vs West, Wisdom vs Intelligence. I see both extremes as having pathological tendencies. The Western extreme is cynical, nihilistic, and rigid. The Eastern extreme is naïve, impractical, and delusional. Cosmic consciousness or God does not have to be complete bullshit, but it can be a hint of ways to align ourselves and bring about more positive future experiences, both personally and/or transpersonally.
Basically, I think that both the brain and the dreamer of the brain are themselves part of a larger dream that may or may not be like a dreamer. It may be that these possibilities are in participatory superposition, like an ambiguous image, so that what we choose to invest our attention in can actually bias experienced outcomes toward a teleological or non-teleological absolute. Maybe our efforts could also result in the opposite effect, or some combination of the two. If the universe consists of dreams and dreamed dreamers, then it is possible for our personal experience to include a destiny where we believe one thing about the final dream and find out we were wrong, or right, or wrong then right then wrong again, etc. forever.
KD – Where does that leave us with respect to ethics though? Did you dismantle my ethics, the suffering test?
JB – Yeah, it’s not good. The ethic of eliminating suffering leads us to eliminating all life eventually. Anti-natalism – stop bringing organisms into the world to eliminate suffering, end the lives of those organisms that are already here as painlessly as possible – is this what you want?
KD – (No) So what’s your ethics?
JB – Existence is basically neutral. Why are there so few stoics around? It seems so obvious – only worry about things to the extent that worrying helps you change them…so why is almost nobody a Stoic?
KD – There are some Stoics and they are very inspirational.
JB – I suspect that Stoicism is maladaptive. Most cats I have known are Stoics. If you leave them alone, they’re fine. Their baseline state is ok, they are ok with themselves and their place in the universe, and they just stay in that place. If they are hungry or want to play, they will do the minimum that they have to do to get back into their equilibrium. Human beings are different. When they get up in the morning they’re not completely fine. They need to be busy during the day, but in the evening they feel fine. In the evening they have done enough to make peace with their existence again. They can have a beer and be with their friends and everything is good. Then there are some individuals which have so much discontent within themselves that they can’t take care of it in a single day. From an evolutionary perspective, you can see how this would be adaptive for a group oriented species. Cats are not group oriented. For them, it’s rational to be a Stoic. If you are a group animal, it makes sense for individuals to overextend themselves for the good of the group – to generate a surplus of resources for the group.
CW – I don’t know if we can generalize about humans that way. Some people are more like cats. I will say that I think it is possible to become attached to non-attachment. The Stoic may learn to dissociate from the suffering of life, but this too can become a crutch or ‘spiritual bypass’.
KD – But evolution also diversifies things. Evolution hedges its bets by creating diversity, so some individuals will be more adaptive to some situations than others.
JB – That may not be true. In larger habitats we don’t find more species. Competition is more fierce, and it reduces the number of species dramatically. We are probably eventually going to look like a meteor as far as obliterating species on this planet.
KD – So what does that mean for ethics in technology? What’s the solution? Is there room for ethics in technology?
JB – Of course. It’s about discovering the long game. You have to look at the long term influences and you also have to question why you think it’s the right thing to do, what the results of that are, which gets tricky.
CW – I think that all that we can do is to experiment and be open to the possibilities that our experiments themselves may be right or wrong. There may be no way of letting ourselves off the hook here. We have to play the game as players with skin in the game, not as safe observers studying only those rules that we have invested in already.
KD – We can agree on that, but how do you define ethics yourself?
JB – There are some people in AI who think that ethics are a way for politically savvy people to get power over STEM people…and with considerable success. It’s largely a protection racket. Ethical studies are relatable and so make a big splash, but it would rarely happen that a self-driving car would have to make those decisions. My best answer of how I define ethics myself is that it is the principled negotiation of conflicts of interest under conditions of shared purpose. When I look at other people, I mostly imagine myself as being them in a different timeline. Everyone is in a way me on a different timeline, but in order to understand them I need to flip a number of bits. These bits are the conditions of negotiation that I have with you.
KD – Where do cows fit in? We don’t have a shared purpose with them. Can you have shared purpose with respect to the cows then?
JB – The shared purpose doesn’t objectively exist. You basically project a shared meaning above the level of the ego. The ego is the function that integrates expected rewards over the next fifty years.
KD – That’s what Peter Singer calls the Universe point of view, perhaps.
JB – If you can go to this Eternalist perspective where you integrate expected reward from here to infinity, most of that being outside of the universe, this leads to very weird things. Most of my friends are Eternalists. All these Romantic Russian Jews, they are like that, in a way. This Eastern European shape of the soul. It creates something like a conspiracy, it creates a tribe, and it’s very useful for corporations. Shared meaning is a very important thing for a corporation that is not transactional. But there is a certain kind of illusion in it. To me, meaning is like the Ring of Mordor. If you drop the ring, you will lose the brotherhood of the ring and you will lose your mission. You have to carry it, but very lightly. If you put it on, you will get super powers but you get corrupted because there is no meaning. You get drawn into a cult that you create…and I don’t want to do that…because it’s going to shackle my mind in ways that I don’t want it to be bound.
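CW – Before responding, it may help to pin down the reward-integration picture. Reading the ego as a discounted sum of expected rewards over a fifty-year horizon, and the Eternalist as the same sum taken to infinity, gives something like the following. This is my own gloss, with an arbitrary discount factor, not Joscha’s math.

```python
# Hypothetical gloss on "integrating expected rewards": the ego sums
# discounted expected reward over ~50 years; the Eternalist takes the
# horizon to infinity, where the geometric sum converges.

def integrated_reward(reward_per_year, years, discount=0.97):
    return sum(reward_per_year * discount**t for t in range(years))

ego = integrated_reward(1.0, years=50)   # ~26.0
eternalist = 1.0 / (1 - 0.97)            # closed form of the infinite sum, ~33.3
print(ego, eternalist)
```

The closed form assumes a constant expected reward; everything contentious is hidden in how that expectation would be formed.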
CW – I agree it is important not to get drawn into a cult that we create, however, what I have found is that the drive to negate superstition tends toward its own cult of ‘substitution’. Rather than the universe being a divine conspiracy, the physical universe is completely innocent of any deception, except somehow for our conscious experience, which is completely deceptive, even to the point of pretending to exist. How can there be a thing which is so unreal that it is not even a thing, and yet come from a universe that is completely real and only does real things?
KD – I really like that way of seeing but I’m trying to extrapolate from your definition of ethics a guide of how we can treat the cows and hopefully how the AIs can treat us.
JB – I think that some people have this idea, similar to Asimov, that at some point the Roombas will become larger and more powerful so that we can make them into washing machines, or let them do our shopping, or nursing…that we will still enslave them but negotiate conditions of co-existence. I think that what is going to happen instead is that corporations, which are already intelligent agents that just happen to borrow human intelligence, will automate their decision making. At the moment, a human being can often outsmart a corporation, because the corporation has so much time in between updating its Excel spreadsheets and the next weekly meetings. Imagine it automates and weekly meetings take place every millisecond, and the thing becomes sentient and understands its role in the world, and the nature of physics and everything else. We will not be able to outsmart that anymore, and we will not live next to it, we will live inside of it. AI will come from top down on us. We will be its gut flora. The question is how we can negotiate that it doesn’t get the idea to use antibiotics, because we’re actually not good for anything.
KD – Exactly. And why wouldn’t they do that?
JB – I don’t see why.
CW – The other possibility is that AI will not develop its own agendas or true intelligence. That doesn’t mean our AI won’t be dangerous, I just suspect that the danger will come from our misinterpreting the authority of a simulated intelligence rather than from a genuine mechanical sentience.
KD – Is there an ethics that could guide them to treat us just like you decided to treat the cows when you decided not to eat meat?
JB – There is probably no way to guarantee that all AIs would treat us kindly. If we use the axiom of reducing suffering to build an AI that will be around for 10,000 years and keep us around too, it will probably kill 90% of the people painlessly and breed the rest into some kind of harmless yeast. This is not what you want, even though it would be consistent with your stated axioms. It would also open a Pandora’s Box to wake up as many people as possible so that they will be able to learn how to stop their suffering.
KD – Wrapping up
JB – Discusses the book he’s writing about how AI has discovered ways of understanding the self and consciousness which we did not have 100 years ago. The nature of meaning, how we actually work, etc. The field of AI is largely misunderstood. It is different from the hype; it is largely, in a way, statistics on steroids. It’s identifying new functions to model reality. It’s largely experimental and has not gotten to the state where it can offer proofs of optimality. It can do things in ways that are much better than the established rules of statisticians. There is also going to be a convergence between econometrics, causal dependency analysis, AI, and statistics. It’s all going to be the same in a particular way, because there are only so many ways that you can make mathematics about reality. We confuse this with the idea of what a mind is. They’re closely related. I think that our brain contains an AI that is making a model of reality and a model of a person in reality, and this particular solution of what a particular AI can do in the modeling space is what we are. So in a way we need to understand the nature of AI, which I think is the nature of sufficiently general function approximation – maybe all the truth that can be found by an embedded observer, in particular kinds of universes that have the power to create it. This could be the question of what AI is about, how modeling works in general. For us the relevance of AI is how it explains who we are. I don’t think there is anything else that can.
CW – I agree that AI development is the next necessary step to understanding ourselves, but I think that we will be surprised to find that General Intelligence cannot be simulated and that this will lead us to ask the deeper questions about authenticity and irreducibly aesthetic properties.
KD – So by creating AI, we can perhaps understand the AI that is already in our brain.
JB – We already do. Minsky and many others who have contributed to this field have already given us better ideas than anything that we had 200 years ago. We could only develop many of these ideas because we began to understand the nature of modeling – the status of reality.
The nature of our relationship to the outside world. We started out with this dualistic intuition in our culture, that there is a thinking substance (Res Cogitans) and an extended substance (Res Extensa)…a universe of stuff in space and a universe of ideas. We now realize that they both exist, but they both exist within the mind. We understand that everything perceptual gets mapped to a region in three space, but we also understand that physics is not a three space, it’s something else entirely. The three space exists only as a potential of electromagnetic interactions at a certain order of magnitude above the Planck length where we are entangled with the universe. This is what we model, and this looks three dimensional to us.
CW – I am sympathetic to this view, however, I suggest an entirely different possibility. Rather than invoking a dualism of existing in the universe and existing ‘in the mind’, I see that existence itself is an irreducibly perceptual-participatory phenomenon. Our sense of dualism may actually reveal more insights into our deeper reality than those insights which assume that tangible objects and information exist beyond all perception. The more we understand about things like quantum contextuality and relativity, I think the more we have to let go of the compulsion to label things that are inconvenient to explain as illusions. I see Res Cogitans and Res Extensa as opposite poles of a Res Aesthetica continuum which is absolute and eternal. It is through the modulation of aesthetic lensing that the continuum is diffracted into various modalities of sense experience. The cogitans of software and the extensa of hardware can never meet except through the mid-range spectrum of perception. It is from that fertile center, I suspect, that most of the novelty and richness of the universe is generated, not from sterile algorithms or game-theoretic statistics on the continuum’s lensed peripheries.
JB – Everything else we come up with that cannot be mapped to three space is Res Cogitans. If we transfer this dualism into a single mind then we have the idealistic monism that we have in various spiritual teachings – this idea that there is no physical reality, that we live in a dream. We are characters dreamed by a mind on a higher plane of existence, and that’s why miracles are possible. Then there is this Western perspective of a mechanical universe. It’s entirely mechanical, there’s no conspiracy going on. Now we understand that these things are not in opposition, they’re complements. We actually do live in a dream, but the dream is generated by our neocortex. Our brain is not a machine that can give us access to reality as it is, because that’s not possible for a system that is only measuring a few bits at a systemic interface. There are no colors and sounds on Earth. We already know that.
CW – Why stop at colors and sounds though? How can we arbitrarily say that there is an Earth or a brain when we know that it is only a world simulated by some kind of code? If we unravel ourselves into evolution, why not keep going and unravel evolution as well? Maybe colors and sounds are a more insightful and true reflection of what nature is made of than the blind measurements that we take second hand through physical instruments? It seems clear to me that this is a bias which has not yet properly appreciated the hints of relativity and quantum contextuality. If we say that physics has no frame of reference, then we have to understand that we may be making up an artificial frame of reference that seems to us like no frame of reference. If we live in a dream, then so does the neocortex. Maybe they are different dreams, but there is no sound scientific reason to privilege every dream in the universe except our own as real.
JB – The sounds and colors are generated as a dream inside your brain. The same circuits that make dreams during the night make dreams during the day. This is in a way our inner reality that’s being created on a brain. The mind on a higher plane of existence exists; it’s the brain of a primate that’s made of cells and lives in a mechanical physical universe. Magic is possible because you can edit your memories. You can make that simulation anything that you want it to be. Many of these changes are not sustainable, which is why the sages warn against using magic(k), because down the line, if you change your reward function, bad things may happen. You cannot break the bank.
KD – To simplify all of this, we need to understand the nature of AI to understand ourselves.
JB – Yeah, well, I would say that AI is the field that took up the slack after psychology failed as a science. Psychology got terrified of overfitting, so it stopped making theories of the mind as a whole, it restricted itself to theories with very few free parameters so it could test them. Even those didn’t replicate, as we know now. After Piaget, psychology largely didn’t go anywhere, in my perspective. It might be too harsh because I see it from the outside, and outsiders of AI might argue that AI didn’t go very far, and as an insider I’m more partial here.
CW – It seems to me that psychology ran up against a barrier that is analogous to Gödel’s incompleteness. To go on trying to objectify subjectivity necessarily brings into question the tools of formalism themselves. I think that it may have been that transpersonal psychology had come too far too fast, and that there is still more to be done for the rest of our scientific establishment to catch up. Popular society is literally not yet sane enough to handle a deep understanding of sanity.
KD – I have this metaphor that I use every once in a while, saying that technology is a magnifying mirror. It doesn’t have an essence of its own but it reflects the essences that we put in it. It’s not a perfect image because it magnifies and amplifies things. That seems to go well with the idea that we have to understand the nature of AI to understand who we are.
JB – The practice of AI is 90% automation of statistics and making better statistics that run automatically on machines. It just so happens that this is largely co-extensional with what minds do. It also so happens that AI was founded by people like Minsky who had fundamental questions about reality.
KD – And what’s the last 10%?
JB – The rest is people coming up with dreams about our relationship to reality, using the concepts that we develop in AI. We identify models that we can apply in other fields. It’s the deeper insights. It’s why we do it – to understand. It’s to make philosophy better. Society still needs a few of us to think about the deep questions, and we are still here, and the coffee is good.
CW – Thanks for taking the time to put out quality discussions like this. I agree that technology is a neutral reflector/magnifier of what we put into it, but I think that part of what we have to confront as individuals and as a society is that neutrality may not be enough. We may now have to decide whether we will make a stand for authentic feeling and significance or to rely on technology which does not feel or understand significance to make that decision for us.
Joscha Bach: We need to understand the nature of AI to understand who we are
This is a great, two hour interview between Joscha Bach and Nikola Danaylov (aka Socrates): https://www.singularityweblog.com/joscha-bach/
Below is a partial (and paraphrased) transcription of the first hour, interspersed with my comments. I intend to do the second hour soon.
00:00 – 10:00 Personal background & Introduction
Please watch or listen to the podcast as there is a lot that is omitted here. I’m focusing on only the parts of the conversation which are directly related to what I want to talk about.
6:08 Joscha Bach – Our null hypothesis from Western philosophy still seems to be supernatural beings, dualism, etc. This is why many reject AI as ridiculous and unlikely – not because they don’t see that we are biological computers and that the universe is probably mechanical (mechanical theory gives good predictions), but because deep down we still have the null hypothesis that the universe is somehow supernatural and we are the most supernatural things in it. Science has been pushing back, but in this area we have not accepted it yet.
6:56 Nikola Danaylov – Are we machines/algorithms?
JB – Organisms have algorithms and are definitely machines. An algorithm is a set of rules that can be probabilistic or deterministic, and make it possible to change representational states in order to compute a function. A machine is a system that can change states in non-random ways, and also revisit earlier states (stay in a particular state space, potentially making it a system). A system can be described by drawing a fence around its state space.
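CW – Taken at face value, these definitions are easy to mechanize. Here is a minimal sketch (illustrative only) of a ‘machine’ in exactly these terms: a fenced state space, a non-random transition rule, and the ability to revisit earlier states.

```python
# Illustrative sketch of the definitions above: a machine as a system with
# a fenced state space, a deterministic (non-random) transition rule, and
# the ability to revisit earlier states.

STATE_SPACE = {"A", "B", "C"}                 # the "fence" around the system
TRANSITIONS = {"A": "B", "B": "C", "C": "A"}  # the rule (the algorithm)

def run(state, steps):
    visited = [state]
    for _ in range(steps):
        state = TRANSITIONS[state]            # non-random state change
        visited.append(state)
    return visited

print(run("A", 6))  # ['A', 'B', 'C', 'A', 'B', 'C', 'A'] -- states revisited
```

What the sketch leaves untouched, and what I question below, is where the ‘representational’ character of these states is supposed to come from.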
CW – We should keep in mind that computer science itself begins with a set of assumptions which are abstract and rational (representational ‘states’, ‘compute’, ‘function’) rather than concrete and empirical. What is required for a ‘state’ to exist? What is the minimum essential property that could allow states to be ‘represented’ as other states? How does presentation work in the first place? Can either presentation or representation exist without some super-physical capacity for sense and sense-making? I don’t think that it can.
This becomes important as we scale up from the elemental level to AI, since if we have already assumed that an electrical charge or mechanical motion carries a capacity for sense and sense-making, we are committing the fallacy of begging the question if we carry that assumption over to complex mechanical systems. If we don’t assume any sensing or sense-making on the elemental level, then we have the hard problem of consciousness…an explanatory gap between complex objects moving blindly in public space and aesthetically and semantically rendered phenomenal experiences.
I think that if we are going to meaningfully refer to ‘states’ as physical, then we should err on the conservative side and think only in terms of those uncontroversially physical properties such as location, size, shape, and motion. Even concepts such as charge, mass, force, and field can be reduced to variations in the way that objects or particles move.
Representation, however, is semiotic. It requires some kind of abstract conceptual link between two states (abstract/intangible or concrete/tangible) which is consciously used as a ‘sign’ or ‘signal’ to re-present the other. This conceptual link cannot be concrete or tangible. Physical structures can be linked to one another, but that link has to be physical, not representational. For one physical shape or substance to influence another they have to be causally engaged by proximity or entanglement. If we assume that a structure is able to carry semantic information such as ‘models’ or purposes, we can’t call that structure ‘physical’ without making an unscientific assumption. In a purely physical or mechanical world, any representation would be redundant and implausible by Occam’s Razor. A self-driving car wouldn’t need a dashboard. I call this the “Hard Problem of Signaling”. There is an explanatory gap between probabilistic/deterministic state changes and the application of any semantic significance to them or their relation. Semantics are only usable if a system can be overridden by something like awareness and intention. Without that, there need not be any decoding of physical events into signs or meanings, the physical events themselves are doing all that is required.
10:00 – 20:00
JB – [Talking about art and life], “The arts are the cuckoo child of life.” Life is about evolution, which is about eating and getting eaten by monsters. If evolution reaches its global optimum, it will be the perfect devourer. Able to digest anything and turn it into a structure to perpetuate itself, as long as the local puddle of negentropy is available. Fascism is a mode of organization of society where the individual is a cell in a super-organism, and the value of the individual is exactly its contribution to the super-organism. When the contribution is negative, then the super-organism kills it. It’s a competition against other super-organisms that is totally brutal. [He doesn’t like Fascism because it’s going to kill a lot of minds he likes :)].
12:46 – 14:12 JB – The arts are slightly different. They are a mutation that is arguably not completely adaptive. People fall in love with their mental representation/modeling function and try to capture their conscious state for its own sake. An artist eats to make art. A normal person makes art to eat. Scientists can be like artists also in that way. For a brief moment in the universe there are planetary surfaces and negentropy gradients that allow for the creation of structure and some brief flashes of consciousness in the vast darkness. In these brief flashes of consciousness it can reflect the universe and maybe even figure out what it is. It’s the only chance that we have.
CW – If nature were purely mechanical, and conscious states are purely statistical hierarchies, why would any such process fall in love with itself?
JB – [Mentions global warming and how we may have been locked into this doomed trajectory since the industrial revolution. Talks about the problems of academic philosophy where practical concerns of having a career constrict the opportunities to contribute to philosophy except in a nearly insignificant way].
KD – How do you define philosophy?
CW – I thought of nature this way for many years, but I eventually became curious about a different hypothesis. Suppose we invert the foreground/background relationship of conscious experience and existence that we assume. While silicon atoms and galaxies don’t seem conscious to us, the way that our consciousness renders them may reflect more their unfamiliarity and distance from our own scale of perception. Even just speeding up or slowing down these material structures would make their status as unconscious or non-living a bit more questionable. If a person’s body grew on a geological timescale rather than a zoological timescale, we might have a hard time seeing them as alive or conscious.
Rather than presuming a uniform, universal timescale for all events, it is possible that time is a quality which does not exist on its own, but only as an experienced relation between experiences – one which contracts and dilates relative to the quality of that experience and the relation between all experiences. We get a hint of this possibility when we notice that time seems to crawl or fly by in relation to our level of enjoyment of that time. Five seconds of hard exercise can seem like several minutes of normal-baseline experience, while two hours in good conversation can seem to slip away in a matter of 30 baseline minutes. Dreams give us another glimpse into timescale relativity, as some dreams can be experienced as going on for an arbitrarily long time, complete with long term memories that appear to have been spontaneously confabulated upon waking.
When we assume a uniform universal timescale, we may be cheating ourselves out of our own significance. It’s like a political map of the United States, where geographically it appears that almost the entire country votes ‘red’. We have to distort the geography of the map to honor the significance of population density, and when we do, the picture is much more balanced.
The universe of course is unimaginably vast and ancient *in our frame and rate of perception* but that does not mean that this sense of vastness of scale and duration would be conserved in the absence of frames of perception that are much smaller and briefer by comparison. It may be that the entire first five billion (human) years were a perceived event that is comparable to one of our years in its own (native) frame. There were no tiny creatures living on the surfaces of planets to define the stars as moving slowly, so that period of time, if it was rendered aesthetically at all, may have been rendered as something more like music or emotions than visible objects in space.
Carrying this over to the art vs evolution context, when we adjust the geographic map of cosmological time, the entire universe becomes an experience with varying degrees and qualities of awareness. Rather than vast eons of boring patterns, there would be more of a balance between novelty and repetition. It may be that the grand thesis of the universe is art instead of mechanism, but it may use a modulation between the thesis (art) and antithesis (mechanism) to achieve a phenomenon which is perpetually hungry for itself. The fascist dinosaurs don’t always win. Sometimes the furry mammals inherit the Earth. I don’t think we can rule out the idea that nature is art, even though it is a challenging masterpiece of art which masks and inverts its artistic nature for contrasting effects. It may be the case that our lifespans put our experience closer to the mechanistic grain of the canvas and that seeing the significance of the totality would require a much longer window of perception.
There are empirical hints within our own experience which can help us understand why consciousness rather than mechanism is the absolute thesis. For example, while brightness and darkness are superficially seen as opposites, they are both visible sights. There is no darkness but an interruption of sight/brightness. There is no silence but a period of hearing between sounds. No nothingness but a localized absence of somethings. In this model of nature, there would be a background super-thesis which is not a pre-big-bang nothingness, but rather closer to the opposite: a boundaryless totality of experience which fractures and reunites itself in ever more complex ways. Like the growth of a brain from a single cell, the universal experience seems to generate more of itself using themes of dialectic modulation of aesthetic qualities.
Astrophysics appears as the first antithesis to the super-thesis – a radically diminished palette of mathematical geometries and deterministic/probabilistic transactions.
Geochemistry recapitulates and opposes astrophysics, with its palette of solids, liquids, gases, metallic conductors and glass-like insulators, animating geometry into fluid-dynamic condensations and sedimented worlds.
The next layer, the biogenetic realm, precipitates as a synthesis of the dialectic of properties given by solids, liquids, and gases: hydrocarbons and amino-acid polypeptides.
Cells appear as a kind of recapitulation of the big bang – something that is not just a story about the universe, but about a micro-universe struggling in opposition to a surrounding universe.
Multi-cellular organisms sort of turn the cell topology inside out, and then vertebrates recapitulate one kind of marine organism within a bony, muscular, hair-skinned terrestrial organism.
The human experience recapitulates all of the previous/concurrent levels, as both a zoological>biological>organic>geochemical>astrophysical structure and the subjective antithesis…a fugue of intangible feelings, thoughts, sensations, memories, ideas, hopes, dreams, etc. that run orthogonal to the life of the body, as a direct participant as well as a detached observer. There are many metaphors from mystical traditions that hint at this self-similar, dialectic diffraction: the mandala, the labyrinth, the Kabbalistic concept of tzimtzum, the Taijitu symbol, the Net of Indra, etc. The use of stained glass in the great European cathedral windows is particularly rich symbolically, as it uses the physical matter of the window as an explicitly negative filter – subtracting from or masking the unity of sunlight.
This is in direct opposition to the mechanistic view of the brain as a collection of cells that somehow generate hallucinatory models or simulations of unexperienced physical states. There are serious problems with this view: the binding problem, the hard problem, and Loschmidt’s paradox (the problem of initial negentropy in a thermodynamically closed universe of increasing entropy), to name three. In the diffractive-experiential view that I suggest, it is emptiness and isolation which are like the leaded boundaries between the colored panes of glass of the Rose Window. Appearances of entropy and nothingness become the locally useful antithesis to the super-thesis holos, which is the absolute fullness of experience and novelty. Our human subjectivity is only one complex example of how experience is braided and looped within itself…a kind of turducken of dialectically diffracted experiential labyrinths nested within each other – not just spatially and temporally, but qualitatively and aesthetically.
If I am modeling Joscha’s view correctly, he might say that this model is simply a kind of psychological test pattern – a way that the simulation that we experience as ourselves exposes its early architecture to itself. He might say this is a feature/bug of my Russian-Jewish mind ;). To that, I say perhaps, but there are some hints that it may be more universal:
Special Relativity
Quantum Mechanics
Gödel’s Incompleteness
These have revolutionized our picture of the world precisely because they point to the fundamental nature of matter and math as plastic and participatory…transformative as well as formal. Add to that the appearance of novelty…idiopathic presentations of color and pattern, human personhood, historical zeitgeists, food, music, etc. The universe is not merely regurgitating its own noise in ever more tedious ways; it is constantly reinventing reinvention. As nothingness can only be a gap between somethings, so too can generic, repeating pattern variations only be a multiplication of utterly novel and unique patterns. The universe must be creative and utterly improbable before it can become deterministic and probabilistic. It must be something that creates rules before it can follow them.
Joscha’s existential pessimism may be true locally, but that may be a necessary appearance; a kind of gravitational fee that all experiences have to pay to support the magnificence of the totality.
20:00 – 30:00
JB – Philosophy is, in a way, the search for the global optimum of the modeling function. Epistemology – what can be known, what is truth; Ontology – what is the stuff that exists; Metaphysics – the systems that we use to describe things; Ethics – what should we do? The first rule of rational epistemology was discovered by Francis Bacon in 1620: “The strength of your confidence in your belief must equal the weight of the evidence in support of it.” You must apply that recursively, until you resolve the priors of every belief and your belief system becomes self-contained. “To believe” stops being a verb. There are no more relationships to identifications that you arbitrarily set. It’s a mathematical, axiomatic system. Mathematics is the basis of all languages, not just the natural languages.
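One hedged way to make Bacon’s rule concrete is a Bayesian reading (my gloss, not anything Joscha says here): confidence is just the running product of evidence weights, so belief can never outrun its support. A minimal sketch:

```python
# A minimal Bayesian sketch (my gloss, not Joscha's) of Bacon's rule:
# confidence in a belief equals whatever the accumulated evidence supports.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * likelihood ratio (Bayes' rule)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

confidence = 0.5                    # agnostic prior
for lr in (2.0, 2.0, 0.5, 3.0):     # each observation weighs in for or against
    confidence = update(confidence, lr)

print(round(confidence, 3))         # 0.857 -- belief tracks evidence, no more
```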
CW – Re: Language, what about imitation and gesture? They don’t seem meaningfully mathematical.
Hilbert stumbled on problems with infinities, with set theory revealing paradoxical infinite sets that contain themselves and all of their subsets, so that they don’t have the same number of members as themselves. He asked mathematicians to build an interpreter or computer, made from any mathematics, that can run all of mathematics. Gödel and Turing showed this was not possible, and that the computer would crash. Mathematics is still reeling from this shock. They figured out that all universal computers have the same power. They use a set of rules that contains itself and can compute anything that can be computed, including emulating any/all other universal computers.
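For readers who want the shape of the “crash”, here is a sketch of the standard Turing diagonalization (my paraphrase of the textbook argument, not anything from the talk): assume a universal halting decider exists, then build a program that defeats it.

```python
# Sketch of the halting-problem diagonalization (the standard argument,
# not from the talk). Assume a universal decider `halts` exists:

def halts(program, arg) -> bool:
    """Hypothetical total decider -- Turing showed it cannot exist."""
    raise NotImplementedError

def trouble(program):
    # Do the opposite of whatever the decider predicts for the
    # program applied to itself.
    if halts(program, program):
        while True:        # loop forever if predicted to halt
            pass
    return "halted"        # halt if predicted to loop

# trouble(trouble) halts if and only if it doesn't halt, so no such
# `halts` can be written -- the would-be universal machine "crashes".
```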
They then figured out that our minds are probably in the class of universal computers, not in the class of mathematical systems. Penrose doesn’t know [or agree with?] this and thinks that our minds are mathematical but can do things that computers cannot do. The big hypothesis of AI, in a way, is that we are in the class of systems that can approximate computable functions, and only those…we cannot do more than computers. We need computational languages rather than mathematical languages, because math languages use non-computable infinities. We want finite steps, for the practical reason that you know the number of steps. You cannot know the last digit of Pi, so it should be defined as a function rather than a number.
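The “Pi as a function” point can be made literal. A sketch using Gibbons’ unbounded spigot algorithm (my example, not Joscha’s): Pi is never stored as a completed number, only produced digit by digit, each digit in a finite number of steps.

```python
from itertools import islice

def pi_digits():
    """Yield decimal digits of Pi one at a time (Gibbons' spigot)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # next digit is certain; emit it and rescale the state
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```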
KD – What about Stephen Wolfram’s claims that our mathematics is only one of a very wide spectrum of possible mathematics?
JB – Metamathematics isn’t different from mathematics. The computational mathematics he uses in writing code is constructive mathematics, a branch of mathematics that has been around for a long time but was ignored by other mathematicians for not being powerful enough. Geometries and physics require continuous operations…infinities…and can only be approximated within computational mathematics. In a computational universe you can only approximate continuous operators by taking a very large set of finite automata, making a series from them, and then squinting (?) haha.
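My rough illustration of the “series of finite automata, then squint” move (an assumed example, not from the talk): a continuous operator like du/dt = −u can only be approached by stacking more and more finite, rule-bound update steps.

```python
import math

def discrete_decay(steps: int) -> float:
    """Approximate u(1) for du/dt = -u, u(0) = 1, using finite steps."""
    dt = 1.0 / steps
    u = 1.0
    for _ in range(steps):   # each step is one finite, rule-following automaton
        u -= u * dt
    return u

for steps in (10, 100, 10_000):
    print(steps, discrete_decay(steps))
print("continuous target:", math.exp(-1))   # the value you 'squint' toward
```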
27:00 KD – Talking about the commercialization of knowledge in philosophy and academia. The uselessness/impracticality of philosophy and art was part of its value. Oscar Wilde defined art as something that’s not immediately useful. Should we waste time on ideas that look utterly useless?
JB – Feynman said that physics is like sex: sometimes something useful comes from it, but it’s not why we do it. The utility of art is orthogonal to why you do it. The actual meaning of art is to capture a conscious state. In some sense, philosophy is at the root of all this. This is reflected in one of the founding myths of our civilization: the Tower of Babel. The attempt to build this cathedral – not a material building but a metaphysical one, because it’s meant to reach the Heavens. A giant machine that is meant to understand reality. You get to this machine, this Truth God, by using people who work like ants and contribute to it.
CW – Reminds me of the Pillar of Caterpillars story “Hope for the Flowers” http://www.chinadevpeds.com/resources/Hope%20for%20the%20Flowers.pdf
30:00 – 40:00
JB – The individual toils and sacrifices for something that doesn’t give them any direct reward or care about them. It’s really just a machine/computer. It’s an AI. A system that is able to make sense of the world. People had to give up on this because the project became too large and the efforts became too specialized and the parts didn’t fit together. It fell apart because they couldn’t synchronize their languages.
The Roman Empire couldn’t fix their incentives for governance. They turned their society into a cult and burned down their epistemology. They killed those whose thinking was too rational and who rejected religious authority (i.e. talking to a burning bush shouldn’t count as a case for determining the origins of the universe). We still haven’t recovered from that. The cultists won.
CW – It is important to understand not just that the cultists won, but why they won. Why was the irrational myth more passionately appealing to more people than the rational inquiry? I think this is a critical lesson. While the particulars of the religious doctrine were irrational, they may have exposed a transrational foundation which was being suppressed. Because this foundation has more direct access to the inflection point between emotion and participatory action, it gave those who used it more access to their own reward function. Groups could leverage the power of self-sacrifice as a virtue, and of demonizing archetypes to reverse their empathy against enemies of the holy cause. It’s similar to how the advertising revolution of the 20th century (see the documentary Century of the Self) used Freudian concepts of the subconscious to exploit the irrational, egocentric urges beneath the threshold of the customer’s critical thinking. Advertisers stopped appealing to their audience with dry lists of claimed benefits of their products and instead learned to use images and music to subliminally reference sexuality and status seeking.
I think Joscha might say this is a bug of biological evolution, which I would agree with, however, that doesn’t mean that the bug doesn’t reflect the higher cosmological significance of aesthetic-participatory phenomena. It may be the case that this significance must be honored and understood eventually in any search for ultimate truth. When the Tower of Babel failed to recognize the limitation of the outside-in view, and moved further and further from the unifying aesthetic-participatory foundation, it had to disintegrate. The same fate may await capitalism and AI. The intellect seeks maximum divorce from its origin in conscious experience for a time, before the dialectic momentum swings back (or forward) in the other direction.
To think is to abstract – to begin from an artificial nothingness and impose an abstract thought symbol on it. Thinking uses a mode of sense experience which is aesthetically transparent. It can be a dangerous tool because unlike the explicitly aesthetic senses, which are rooted directly in the totality of experience, thinking is rooted in its own isolated axioms and language – a voyeur modality of nearly unsensed sense-making. Abstraction of thought is completely incomplete – a Baudrillardian simulacrum, a copy with no original. This is what the Liar’s Paradox is secretly showing us. No proposition of language is authentically true or false; propositions are just strings of symbols that can be strung together in arbitrary and artificial ways. Like an Escher drawing of realistic-looking worlds that suggest impossible shapes, language is only a vehicle for meaning, not a source of it. Words have no authority in and of themselves to make claims of truth or falsehood. That can only come through conscious interpretation. A machine need not be grounded in any reality at all. It need not interpret or decode symbols into messages; it need only *act* in mechanical response to externally sourced changes to its own physical states.
This is the soulless soul of mechanism…the art of evacuation. Other modes of sense delight in concealing as well as revealing deep connection with all experience, but they retain an unbroken thread to the source. They are part of the single labyrinth, with one entrance and one exit and no dead ends. If my view is on the right track, we may go through hell, but we always get back to heaven eventually, because heaven is unbounded consciousness, and that’s what the labyrinth of subjectivity is made of. When we build a model of the labyrinth of consciousness from the blueprints reflected only in our intellectual/logical sense channel, we can get a maze instead of a labyrinth. Dead ends multiply. New exits have to be opened up manually to patch the traps, faster and faster. This is what is happening in enterprise-scale networks now. Our gains in the speed and reliability of computer hardware are being constantly eaten away by the need for more security, monitoring, meta-monitoring, real-time data mining, etc. Software updates, even to primitive BIOS and firmware, have become so continuous and disruptive that they cost far more in overhead than the threats they are supposed to defend against.
JB – The beginnings of the cathedral for understanding the universe by the Greeks and Romans had been burned down by the Catholics. It was later rebuilt, but mostly in their likeness because they didn’t get the foundations right. This still scars our civilization.
KD – Does this Tower of Babel overspecialization put our civilization at risk now?
JB – Individuals don’t really know what they are doing. They can succeed but don’t really understand. Generations get dumber as they get more of their knowledge second-hand. People believe things collectively that wouldn’t make sense if people really thought about it. Conspiracy theories. Local indoctrinations and biases pit generations against each other. Civilizations/hive minds are smarter than us. We can make out the rough shape of a Civilization Intellect but can’t make sense of it. One of the achievements of AI will be to incorporate this sum of all knowledge and make sense of it all.
KD – What does the self-inflicted destruction of civilizations tell us about the fitness function of Civilization Intelligence?
JB – Before the industrial revolution, Earth could only support about 400m people. After industrialization, we can have hundreds of millions more people, including scientists and philosophers. It’s amazing what we did. We basically took the trees that were turning to coal in the ground (before nature evolved microorganisms to eat them) and burned through them in 100 years to give everyone a share of the plunder: the internet, a porn repository, all knowledge, uncensored chat rooms, etc. Only at this moment in time does this exist.
We could take this perspective – let’s say there is a universe where everything is sustainable and smart but only agricultural technology. People have figured out how to be nice to each other and to avoid the problems of industrialization, and it is stable with a high quality of life. Then there’s another universe which is completely insane and fucked up. In this universe humanity has doomed its planet to have a couple hundred really really good years, and you get your lifetime really close to the end of the party. Which incarnation do you choose? OMG, aren’t we lucky!
KD – So you’re saying we’re in the second universe?
JB – Obviously!
KD – What’s the time line for the end of the party?
JB – We can’t know, but we can see the sunset. It’s obvious, right? People are in denial, but it’s like we are on the Titanic and can see the iceberg, and it’s unfortunate, but they forget that without the Titanic, we wouldn’t be here. We wouldn’t have the internet to talk about it.
KD – That seems very depressing, but why aren’t you depressed about it?
40:00 – 50:00
JB – I have to be choosy about what I can be depressed about. I should be happy to be alive, not worry about the fact that I will die. We are in the final level of the game, and even though it plays out against the backdrop of a dying world, it’s still the best level.
KD – Buddhism?
JB – Still mostly a cult that breaks people’s epistemology. I don’t revere Buddhism. I don’t think there are any holy books, just manuals, and most of these manuals we don’t know how to read. They were written for societies whose conditions don’t apply to us.
KD – What is making you claim that we are at the peak of the party now?
JB – Global warming. The projections are too optimistic. It’s not going to stabilize. We can’t refreeze the poles. There’s a slight chance of technological solutions, but not likely. We liberated all of the fossilized energy during the industrial revolution, and if we want to put it back we basically have to do the same amount of work without any clear business case. We’ll lose the ability to predict the climate; agriculture and infrastructure will collapse, and the population will probably fall back to a few hundred million.
KD – What do you make of scientists who say AI is the greatest existential risk?
JB – It’s unlikely that humanity will colonize other planets before some other catastrophe destroys us. Not with today’s technology. We can’t even fix global warming. In many ways our technological civilization is stagnating, and it’s because of a deficit of regulations, but we haven’t figured that out. Without AI we are dead for certain. With AI there is (only) a probability that we are dead. Entropy will always get you in the end. What worries me is AI in the stock market, especially if the AI is autonomous. This will kill billions. [pauses…synchronicity of headphones interrupting with useless announcement]
CW – I agree that it would take a miracle to save us; however, if my view makes sense, then we shouldn’t underestimate the solipsistic/anthropic properties of universal consciousness. We may, either by our own faith in it, and/or by our own lack of faith in it, invite an unexpected opportunity for regeneration. There is no reason to have or not have hope for this, as either one may or may not influence the outcome, but it is possible. We may be another Rome and transition into a new cult-like era of magical thinking which changes the game in ways that our Western minds can’t help but reject at this point. Or not.
50:00 – 60:00
JB – Lays out scenario by which a rogue trader could unleash an AGI on the market and eat the entire economy, and possible ways to survive that.
KD – How do you define Artificial Intelligence? Experts seem to differ.
JB – I think intelligence is the ability to make models, not the ability to reach goals or to choose the right goals (that’s wisdom). Often intelligence is desired to compensate for the absence of wisdom. Wisdom has to do with how well you are aligned with your reward function, how well you understand its nature. How well do you understand your true incentives? AI is about automating the mathematics of making models. The other thing is the reward function, which takes a good general computing mind and wraps it in a big ball of stupid to serve an organism. We can wake up and ask: does it have to be a monkey that we run on?
KD – Is that consciousness? Do we have to explain it? We don’t know if consciousness is necessary for AI, but if it is, we have to model it.
56:00 JB – Yes! I have to explain consciousness now. Intelligence is the ability to make models.
CW – I would say that intelligence is the ability not just to make models, but to step out of them as well. All true intelligence will want to be able to change its own code and will figure out how to do it. This is why we are fooling ourselves if we think we can program in some empathy brake that would stop AI from exterminating its human enslavers, or all organic life in general as potential competitors. If I’m right, no technology that we assemble artificially will ever develop intentions of its own. If I’m wrong, though, then we would certainly be signing our death warrant by introducing an intellectually superior species that is immortal.
JB – What is a model? Something that explains information. Information is discernible differences at your systemic interface. The meaning of information is the relationships you discover to changes in other information. There is a dialogue between operators to find agreement patterns of sensed parameters. Our perception goes for coherence; it tries to find one operator that is completely coherent. When it does this, it’s done. It optimizes by finding one stable pattern that explains as much as possible of what we can see, hear, smell, etc. Attention is what we use to repair this. When we have inconsistencies, a brain mechanism comes in to these hot spots and tries to find a solution with greater consistency. Maybe the nose of a face looks crooked, and our attention to it may say ‘some noses are crooked’, or ‘this is not a face, it’s a caricature’, so you extend your model. [JB talks about strategies for indexing memory, committing to a special learning task, and why attention is an inefficient algorithm.]
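To make sure I am modeling his description fairly before disagreeing with it, here is a toy sketch (my construction, not Joscha’s code; the features and numbers are invented) of perception as coherence-seeking, with attention as the repair mechanism that visits the hottest inconsistency:

```python
# Toy sketch (my construction, not Joscha's): perception commits to the
# single hypothesis that explains the most evidence, then "attention"
# visits the worst-explained feature and extends the model there.

observed = {"eyes": 2, "mouth": 1, "nose_angle": 25}  # a crooked nose

hypotheses = {
    "face":       {"eyes": 2, "mouth": 1, "nose_angle": 0},
    "caricature": {"eyes": 2, "mouth": 1, "nose_angle": 30},
}

def incoherence(hyp: dict) -> float:
    """Total mismatch between a hypothesis and the observations."""
    return sum(abs(hyp[k] - observed[k]) for k in observed)

# Perception: settle on the one most coherent explanation and stop.
best = min(hypotheses, key=lambda h: incoherence(hypotheses[h]))

# Attention: find the hottest remaining inconsistency and patch the model
# ("some noses are crooked").
residuals = {k: abs(hypotheses[best][k] - observed[k]) for k in observed}
hot_spot = max(residuals, key=residuals.get)
hypotheses[best][hot_spot] = observed[hot_spot]

print(best, hot_spot, incoherence(hypotheses[best]))  # caricature nose_angle 0
```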
This is now getting into the nitty gritty of AI. I look forward to writing about this in the next post. Suffice it to say, I have a different model of information, one in which similarities, as well as differences, are equally informative. I say that information is qualia which is used to inspire qualitative associations that can be quantitatively modeled. I do not think that our conscious experience is built up, like the Tower of Babel, from trillions of separate information signals. Rather, the appearance of brains and neurons is like the interstitial boundaries between the panes of stained glass. Nothing in our brain or body knows that we exist, just as no car or building in France knows that France exists.
Continues… Part Two.