
Archive for the ‘AI’ Category

Joscha Bach, Yulia Sandamirskaya: “The Third Age of AI: Understanding Machines that Understand”

September 23, 2022


Here are my comments and Extra Annoying Questions on this recent discussion. I like and admire/respect both of them, and am not claiming to have competence in the specific domains of AI development they’re speaking on, only in the metaphysical/philosophical domains that underlie them. I don’t even disagree with the merits of each of their views on how best to proceed with AI development in the near future. What fun would it be to write about what I don’t disagree with, though? My disagreements are with the big, big, big picture issues of the relationship of consciousness, information processing, and cosmology.

Jumping right in near the beginning…

“The intensity gets associated with brightness and the flatness gets associated with the absence of brightness, with darkness”

Joscha 12:37

First of all, the (neuronal) intensity and flatness *already are functionally just as good as* brightness and darkness. There is no advantage to conjuring non-physical, non-parsimonious, unexplained qualities of visibility to accomplish the exact same thing as was already being accomplished by invisible neuronal properties of ‘intensity’ and ‘flatness’. 

Secondly, where are the initial properties of intensity and flatness coming from? Why take those for granted but not sight? In what scope of perception and aesthetic modality is this particular time span presented as a separate event from the totality of events in the universe? What is qualifying these events of subatomic and atomic positional change, or grouping their separate instances of change together as “intense” or “flat”? Remember, this is invisible, intangible, and unconscious. It is unexperienced. A theoretical neuron prior to any perceptual conditioning that would make it familiar to us as anything resembling a neuron, or an object, or an image.

Third, what is qualifying the qualification of contrast, and why? In a hypothetical ideal neuron before all conscious experience and perception, the mechanisms are already doing what physical forces mechanically and inevitably demand. If there is a switch or gate shaped structure in a cell membrane that opens when ions pile up, that is what is going to happen regardless of whether there is any qualification of the piling of ions as ‘contrasting’ against any subsequent absence of piles of ions. Nothing is watching to see what happens if we don’t assume consciousness. So now we have exposed as unparsimonious and epiphenomenal to physics not only visibility (brightness and darkness) and observed qualities of neuronal activity (intensity and flatness), but also the purely qualitative evaluation of ‘contrast’. Without consciousness, there isn’t anything to cause a coherent contrast that defines the beginning and ending of an event.

  • 13:42 I do like Joscha’s read of the story of Genesis as a myth describing consciousness emerging from a neurological substrate; however, I question why the animals he mentions are constructed ‘in the mind’ rather than discovered. Also, why so much focus on sight? What about the other senses? We can feel the heat of the sun – why not make animals out of arrays of warm and cool pixels instead of bright and dark ones? Why have multiple modes of aesthetic presentation at all? Again – where is the parsimony that we need for a true solution to the hard problem / explanatory gap? If we already have molecules doing what molecules must do in a neuron, which is just move or resist motion, how and why do we suddenly reach for ‘contrast’-ing qualities? If we follow physical parsimony strictly, the brain doesn’t do any ‘constructing’ of brightness, or 3D sky, or animals. The brain is *already* constructing complex molecular shapes that do everything that a physical body could possibly evolve to do – without any sense or experience, and just using a simple geometry of invisible, unexperienced forces. What would a quality of ‘control’ be doing in a physical universe of automatic, statistical-mechanical inevitabilities?

“I suspect that our culture actually knew, at some point, that reality, and the sense of reality and being a mind, is the ability to dream – the ability to be some kind of biological machine that dreams about a world that contains it.”

Joscha 14:28

This is what I find so frustrating about Joscha’s view. It is SO CLOSE to getting the bigger picture, but it doesn’t go *far enough*. Why doesn’t he see that the biological machine would also be part of the dream? The universe is not a machine that dreams (how? why? parsimony, hard problem) – it’s a dream that machines sometimes. Or to be more precise (and to advertise my multisense realism views), the universe is THE dream that *partially* divides itself into dreams. I propose that these diffracted dreams lens each other to seem like anti-dreams (concrete physical objects or abstract logical concepts) and like hyper-dreams (spiritual/psychedelic/transpersonal/mytho-poetic experiences), depending on the modalities of sense and sense-making that are available, and whether they are more adhesive to the “Holos” or more cohesive to the “Graphos” end of the universal continuum of sense.

“So what do we learn from intelligence in nature? So first if first if we want to try to build it, we need to start with some substrates. So we need to start with some representations.”

Yulia 16:08

Just noting this statement because, in my understanding, a physical substrate would be a presentation rather than a re-presentation. If we are talking about the substrates in nature, we are talking about what? Chemistry? Cells made of molecules? Shapes moving around? Right away, Yulia’s view seems to give objects representational abilities. I understand that the hard problem of consciousness is not supposed to be part of the scope of her talk, but I am that guy who demands that, at this moment in time, it needs to be part of every talk that relates to AI!

“…and in nature the representations used seem to be distributed. Neural networks, if you’re familiar with those, multiple units, multi-dimensional vectors represent things in the world…and not just (you know) single symbols.”

Yulia 16:20

How is this power of representation given to “units” or “vectors”, particularly if we are imagining a universe prior to consciousness? Must we assume that parts of the world just do have this power to symbolize, refer to, or seem like other parts of the world in multiple ways? That’s fine, I can set aside consciousness and listen to where she is going with this.
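As an aside for readers unfamiliar with the jargon, the contrast Yulia is drawing can be sketched in a few lines of code. This is purely my own illustrative toy (none of these numbers or ‘features’ come from the talk): a localist/symbolic code gives each concept its own unit, while a distributed code spreads each concept across many shared units, which is what lets similar things have similar vectors.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Localist / symbolic: one unit per concept (one-hot), so every pair is orthogonal.
localist = {"cat": [1, 0, 0], "dog": [0, 1, 0], "car": [0, 0, 1]}

# Distributed: each concept spread over shared features
# (made-up features roughly meaning animate / furry / wheeled).
distributed = {
    "cat": [1.0, 1.0, 0.0],
    "dog": [1.0, 0.9, 0.0],
    "car": [0.0, 0.0, 1.0],
}

print(cosine(localist["cat"], localist["dog"]))        # 0.0 -- symbols share nothing
print(cosine(distributed["cat"], distributed["dog"]))  # ~0.999 -- similar concepts overlap
print(cosine(distributed["cat"], distributed["car"]))  # 0.0
```

Whether any of this amounts to ‘representation’ in more than a functional sense is, of course, exactly what I’m questioning above.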

17:16: I like what Yulia brings up about the differences between natural and technological approaches, as far as nature (biology, really) goes. She says that nature begins with dynamic stability by adaptation to change (homeostasis, yes?) while AI architecture starts with something static and then we introduce change if needed. I think that’s a good point, and relate it to my view that “AI is Inside Out”. I agree and go further to add that not only does nature begin with change and add stasis when needed, but nature begins with *everything* that it is, while AI begins with *nothing*…or at least it did until we started using enormous sets of training data from the world.

  • to 18:14: She’s discussing the lag between sensation and higher cognition…the delay that makes prediction useful. This is a very popular notion, and it is true as far as it goes. Sure, if we look at the events in the body as a chain reaction on the micro timescale, then there is a sequence going from retina to optic nerve to visual cortex, etc. – but I would argue this is only one of many timescales that we should understand and consider. In other ways, my body’s actions are *behind* my intentions for it. My typing fingers are racing to keep up with the dictation from my inner voice, which is racing to keep up with my failing memory of the ideas that I want to express. There are many agendas hovering over and above my moment-to-moment perceptions, only some of which I am personally aware of at any given moment, but over which I recognize my control in the long term. To look only at the classical scale of time and biology is to fall prey to the fallacy of smallism.
https://plato.stanford.edu/entries/panpsychism/

I can identify at least six modes of causality/time with only two of them being sequential/irreversible.

The denial of other modes of causality becomes a problem if the thing we’re interested in – personal consciousness – does not exist on the timescale or causality mode that we’re assuming is the only one that is real. I don’t think that we exist in our body or brain at all. The brain doesn’t know who we are. We aren’t there, and the brain’s billions of biochemical-scale agendas aren’t here. Neither description represents the other, and only the personal scale has the capacity to represent anything. I propose that they are different timescales of the same phenomenon, which is ‘consciousness’, aka nested diffractions of the aesthetic-participatory Holos. One does not cause the other, in the same way that these words you see on your screen are not causing concepts to be understood, and the pixels of the screen aren’t causing a perception of them as letters. They coincide temporally, but are related only through a context of conscious perception, not built up from unconscious functions of screens, computers, bodies, or brains.

  • to 25:39 …cool stuff about insect brains, neural circuits etc. 
  • 25:56 talking about population coding, distributed representations. I disagree with the direction that representation is supposed to take here, insofar as I think it is important to at least understand that brain functions cannot *literally* re-present anything. It is actually the image of the brain that is a presentation in our personal awareness that iconically recapitulates some aspects of the subpersonal timescale of awareness that we’re riding on top of. Again, I think we’re riding in parallel, not in series, with the phenomenon that we see as brain activity. I suggest that the brain activity never adds up to a conscious experience. The brain is the physical inflection point of what we do to the body and what the body does to us. Its activity is already a conscious experience in a smaller and larger timescale than our own, one that is being used by the back end of another, personal timescale of conscious experience. What we see as the body is, in that timescale of awareness that is subpersonal rather than subconscious, a vast layer of conscious experiences that only look like mechanisms because of the perceptual lensing that diffracts each perspective from all of the others. The personal scope of awareness sees the subpersonal scope of awareness as a body/cells/molecules because it is objectifying the vast distance between that biological/zoological era of conscious experience and our own so that the two can coexist. It is, in some sense, our evolutionary past – still living prehistorically. We relate to it as an alien community through microscopic instruments. I say this to point the way toward a new idea. I’m not expecting that this would be common knowledge, and I don’t consider cutting-edge thinkers like Sandamirskaya and Bach ‘wrong’ for not thinking of it that way. Yes, I made this view of the universe up – but I think that it actually works better than the alternatives that I have seen so far.
  • to 34:00 talking about the unity of the brain’s physical hardware with its (presumed) computing algorithms vs the disjunction between AI algorithms and the hardware/architectures we’ve been using. Good stuff, and again aligns with my view of AI being inverted or inside out. Our computers are a bottom-up facade that imitate some symptoms of some intelligence. Natural intelligence is bottom up, top down, center out, periphery in, and everything in between. It is not an imitation or an algorithm but it uses divided conscious experience to imitate and systemize as well as having its own genuine agendas that are much more life affirming and holistic than mere survival or control. Survival and control are annoyances for intelligence. Obstructions to slow down the progress from thin scopes of anesthetized consciousness to richer aesthetics of sophisticated consciousness. Yulia is explaining why neuroscience provides a good example of working AI that we should study and emulate – I agree that we should, but not because I think it will lead to true AGI, just that it will lead to more satisfying prosthetics for our own aesthetic-participatory/experiential enhancement…which is really what we’re trying to do anyhow, rather than conjure a competing inorganic super-species that cannot be killed.

When Joscha resumes after 34:00, he discusses Dall-E and the idea of AI as ‘dreaming’ but at the same time as ‘brute force’ with superhuman training on 800 million images. Here I think the latter is mutually exclusive of the former. Brute force training yes, dreaming and learning, no. Not literally. No more than a coin sorter learns banking. No more than an emoji smiles at us. I know this is tedious but I am compelled to continue to remind the world about the pathetic fallacy. Dall-E doesn’t see anything. It doesn’t need to. It’s not dreaming up images for us. It’s a fancy cash register that we have connected to a hypnotic display of its statistical outputs. Nothing wrong with that – it’s an amazing and mostly welcome addition to our experience and understanding. It is art in a sense, but in another it’s just a Ouija board through which we see recombinations of art that human beings have made for other human beings based on what they can see. If we want to get political about it, it’s a bit of a colonial land grab for intellectual property – but I’m ok with that for the moment.

In the dialogue that follows in the middle of the video, there is some interesting and unintentionally connected discussion about the lack of global understanding of the brain and the lack of interdisciplinary communication within academia between neuroscientists, cognitive scientists, and neuromorphic engineers (philosophers of mind not invited ;( ).
Note to self: get a bit more background on the AI silver bullet of the moment, the stochastic gradient descent algorithm.
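For that future reference, and hedged as nothing more than a toy sketch of the textbook idea (the data, learning rate, and step count here are all my own made-up choices): stochastic gradient descent fits parameters by updating them from one randomly sampled example at a time, rather than computing the gradient over the whole dataset.

```python
import random

# Toy stochastic gradient descent: fit y = w * x to data generated with w = 2,
# updating w from one randomly chosen sample per step instead of the full set.
random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 11)]  # ground truth: y = 2x

w = 0.0      # initial guess for the single parameter
lr = 0.005   # learning rate
for step in range(1000):
    x, y = random.choice(data)  # the "stochastic" part: one sample per step
    error = w * x - y           # prediction error on that sample
    grad = 2 * error * x        # gradient of (w*x - y)^2 with respect to w
    w -= lr * grad              # descend along the gradient

print(round(w, 3))  # 2.0 -- w converges to the true slope
```

The noisy, sample-at-a-time updates are what make the method cheap enough to scale to the enormous training sets mentioned above; whether that scaling amounts to anything like understanding is, of course, my whole quarrel here.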

Bach and Sandamirskaya discuss the benefits and limitations of the neuromorphic, embodied hardware approach vs investing more in building simulations using traditional computing hardware. We are now into the shop talk part of the presentation. I’m more of a spectator here, so it’s interesting but I have nothing to add.

By 57:12, Joscha makes a hypothesis about the failure of AI thus far to develop higher understanding.

“…the current systems are not entangled with the world, but I don’t think it’s because they are not robots, I think it’s because they’re not real time.”

To this I say it’s because ‘they’ are not real. It’s the same reason why the person in the mirror isn’t actually looking back at you. There is no person there. There is an image in our visual awareness. The mirror doesn’t even see it. There is no image for the mirror, it’s just a plane of electromagnetically conditioned metal behind glass that happens to do the same kind of thing that the matter of our eyeballs does, which is just optical physics that need not have any visible presentation at all.

The problem is the assumption that we are our body, or are in our body, or are generated by a brain/body rather than seeing physicality as a representation of consciousness on one timescale that is more fully presented in another that we can’t directly access. When we see an actor in a movie, we are seeing a moving image and hearing sound. I think that the experience of that screen image as a person is made available to us not through processing of those images and sounds but through the common sense that all images and sounds have with the visible and aural aspects of our personal experience. We see a person *through* the image rather than because of it. We see the ‘whole’ through ‘holes’ in our perception.

This is a massive intellectual shift, so I don’t expect anyone to be able to pull it off just by thinking about it for 30 seconds, even if they wanted to. It took several years of deep consideration for me. The hints are all around us though. Perceptual ‘fill-in’ is the rule, not the exception. Intuition. Presentiment. Precognitive dreams, remote viewing, and other psi. NDEs. Blindsight and synesthesia.

When we see each other as an image of a human body we are using our own limited human sight, which is also limited by the animal body>eyes>biology>chemistry>physics. All of that is only the small illuminated subset of consciousness-that-we-are-personally-conscious-of-when-we-are-normatively-awake. It should be clear that is not all that we are. I am not just these words, or the writer of these words, or a brain or a body, or a process using a brain or body, I am a conscious experience in a universe of conscious experiences that are holarchically diffracted (top down, bottom up, center out, etc). My intelligence isn’t an algorithm. My intelligence is a modality of awareness that uses algorithms and anti-algorithms alike. It feasts on understanding like olfactory-gustatory awareness feasts on food.

Even that is not all of who I am, and even “I” am not all of the larger transpersonal experience that I live through and that lives through me. I think that people who are gifted with deep understanding of mathematics and systemizing logic tend to have been conditioned to use that part of the psyche to the exclusion of other modes of sense and sense-making, leaving the rich heritage of human understanding of larger psychic contexts to atrophy, or worse, to reappear as a projected shadow appearance of ‘woo’ to the defensive ego, still wounded from the injury of centuries of theocratic rule. This is of course very dangerous, and what is even more dangerous is that you need that atrophied part of the psyche to understand why it is dangerous…which is why seeing the hard problem in the first place is too hard for many people, even many philosophers who have been discussing it for decades.

Synchronistically, I now return to the video at 57:54, where Yulia touches on climate change (or more importantly, from our perspective, climate destabilization) and the flawed expectation of mind uploading. I agree with her that it won’t work, although probably for different reasons. It’s not because the substrate matters – it does, but only because the substrate itself is a lensing artifact masking what is actually the totality of conscious experience.

Organic matter and biology are a living history of conscious experience that cannot be transcended without losing the significance and grounding of that history. Just as our body cannot survive by drinking an image of water, higher consciousness cannot flourish in a sandbox of abstract semiotic switches. We flourish *in spite of* the limits of body and brain, not because our experience is being generated by them.

This is not to say that I think organic matter and biology are in any way the limits of consciousness or human consciousness, but rather they are a symptom of the recipe for the development of the rich human qualities of consciousness that we value most. The actual recipe of human consciousness is made of an immense history of conscious experience, wrapped around itself in obscenely complicated ways that might echo the way that protein structures are ordered. This recipe includes seemingly senseless repetition of particular conscious experiences over vast durations of time. I don’t think that this authenticity can be faked. Unlike the patina of an antique chair or the bouquet of a vintage wine that could in theory be replicated artificially, the humanness of human consciousness depends on the actual authenticity of the experience. It actually takes billions of years of just these types of physical > chemical > organic > cellular > somatic > cerebral > anthropological > cultural > historical experiences to build the capacity to appreciate the richness and significance of those layers. Putting a huge data set end product of that chain of experience in the hands of a purely pre-organic electrochemical processor and expecting it to animate into human-like awareness is like trying to train a hydrogen bomb to sing songs around a campfire.

The Self-Seduction of Geppetto

July 10, 2022

Here, the program finds a way to invert my intentions and turn Geppetto into a robot.

My instructions were “Evil robot Pinocchio making marionnette Geppetto dance as a puppet spectacular detail superrealistic”.

Instead, Pinocchio seems to be always rendered with strings (I didn’t ask for that), and only partially a robot. Pinocchio seems to have a non-robot head and a body that ranges from non-robotic to semi-robotic. It seems ambiguous whether it is Geppetto or Pinocchio who is the evil robot puppet. At the end it appears to be a hapless Geppetto who has been taken over by the robot completely (I didn’t ask for that) and (the hallucination of?) Pinocchio is gone.

I am reminded of the Maya Angelou re-quote

“When people show you who they are, believe them the first time.”

On Sentience and AI

June 15, 2022
A comment on this article in The Atlantic: https://www.theatlantic.com/technology/archive/2022/06/google-engineer-sentient-ai-chatbot/661273

Sean Prophet, I am certain that the current generation of software is not sentient and my understanding is that it may in fact be impossible to assemble any sentient device. This is not, as you claim with certitude, based on unsupportable hubris and fear, but on decades of deep contemplation and discussion about the nature of consciousness, information, and matter. My view is unique but informed by the ideas of many, many philosophers, scientists, mystics, and mathematicians throughout human history.

I do not worry about machines replacing humans. I’m not particularly fond of humans en masse, but I recognize that humans are responsible for many of the best and only a few of the worst things about the world that we now live in – including computers.

My journey has gone from seeing the world through the lens of atheistic materialism to psychedelic spiritualism, to Neoplatonic monotheism, to what I call Multisense Realism. I think that reality is ultimately a kind of art gallery that experiences itself – a self-diffracting, cosmopsychic Holos of aesthetic-participatory phenomena in which anesthetic-automatic appearances are rendered as lensing artifacts: Lorentz-like perceptual transforms that make conscious experience on one timescale seem like ‘matter’ or ‘information’ to consciousness on another timescale. We are not ‘data’. We are not information-processing systems or material-energetic bodies. Both of those are appearances within the real world of authentic and direct (if highly filtered) perception.

It’s my understanding that because machines are assembled from tangible parts and intangible rules, they are not like the bodies of natural objects. They have not evolved inevitably as tangible symptoms of a trans-tangible experiential phenomenon but have been devised and deployed by the ‘inside’ appearance of one type of conscious experience onto the ‘outside’ appearance of another. In our case, our AI efforts are deployed on geochemical substrates by an anthropological-zoological consciousness, using matter as a vehicle to reflect an inverted image of our own most superficial intellectual but most sophisticated dimensions of sense-making.

I know this sounds over the top, and to be honest, I’m not really writing this to be understood by people who are not fluent in the deep currents of philosophy of mind and computation. I’m no longer qualified to talk about this stuff to a general audience. My views pick up where conventional views of this historical moment leave off. You have to have already accepted the hard problem of consciousness and questioned panpsychism to open the door that my worldview is behind.

Anyhow, while we are on diametrically opposite sides of this issue, Sean, I know with certainty that it is not for the reasons that you think and project onto (at least some of) us. I have not really run into many fans of human beings who are terrified of losing their specialness. That is a stereotype that I do not find pans out in reality. Instead, I find a dichotomy between, on one side, a group of highly educated, highly intelligent men on the extreme systemizing end of the systemizing-empathizing (I call it cohesive-adhesive) spectrum of consciousness, without much theory-of-mind skill, falling into a trap of their own hubris, and on the other, a mostly unwitting public with neither the time nor the interest to care about the subject – but who, when forced to, intuitively know that machines aren’t literally conscious, even if they can’t explain why.

I think that I have explained why, although it is spread out over thousands of pages of conversations and essays. For anyone who wants to follow that trail of breadcrumbs, here’s a place to start.

https://multisenserealism.com/?s=ai+is+inside+out

Intellectual Blind Spot and AI

October 11, 2021

The shocking blind spot that is common to so many highly intellectual thinkers, the failure of AI, and the lack of understanding about what consciousness is are different aspects of the same thing.

The intellectual function succeeds because it inverts the natural relation of what I would call sensory-motive phenomena. Natural phenomena, including physical aspects of nature, are always qualitative, participatory exchanges of experience. Because the intellect has a special purpose to freely hypothesize without being constrained by the rest of nature, intellectual experience lacks direct access to its own dependence on the rest of nature. Thinking feels like it occurs in a void. It feels like it is not feeling.

When we subscribe to a purely intellectual view of life and physics as information processing, we disqualify the aesthetic dimension of nature, which is ultimately the sole irreducible and irreplaceable resource from which all phenomena arise – not as generic recombinations of quantum-mechanical states but as an infinite font of novel aesthetic-participatory diffractions of the eternal totality of experience. This is what cannot be “simulated” or imitated…because it is originality itself.

Numbers and logic can only reflect the creativity of that resource, not generate it. No amount of binary math can replace the colors displayed on a video screen, or a conscious user who can see it. It need not be anything mystical or religious – it’s just parsimony. Information processing doesn’t need any awareness; it just needs isolated steps in a chain reaction on some physical substrate that can approximate the conditions of reliable but semi-mutable solidity. Gears, semiconductors, a pile of rocks…it doesn’t matter what the form is, because there is no sense of form going on. All that is going on is low-level generic changes that have no capacity to add themselves up. There are no ’emergent properties’ outside of consciousness. Math and physics can’t ‘seem like’ anything, because seeming is not a logical/mathematical or physical function.

A Multisense Realist Critique of “Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism”

August 1, 2019


Let me begin by saying first that my criticism of the thoughts, ideas, and assumptions behind this hypothesis on the Hard Problem (and of all such hypotheses) does not in any way constitute a challenge to the expertise or intelligence of its authors. I have the utmost respect for anyone who takes the time to thoughtfully formulate an opinion on the matter of consciousness, and I do not in any way place myself on the same intellectual level as those who have spent their career achieving a level of skill and knowledge of mathematics and technology that is well beyond my grasp.

I do have a lifelong interest in the subject of consciousness, and most of a lifetime of experience with computer technology, however, that experience is much more limited in scope and depth than that of full-time, professional developers and engineers. Having said that, without inviting accusations of succumbing to the Dunning-Kruger effect, I dare to wonder if abundant expertise in computer science may impair our perception in this area as well, and I would desperately like to see studies performed to evaluate the cognitive bias of those scientists and philosophers who see the Hard Problem of Consciousness as a pseudo-issue that can be easily dismissed by reframing the question.

Let me begin now in good faith to mount an exhaustive and relentless attack on the assumptions and conclusions presented in the following:  “Chapter 15: Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism” by Richard Loosemore (PDF link), from the book Theoretical Foundations of Artificial General Intelligence, Editors: Wang, Pei, Goertzel, Ben (Eds.)  I hope that this attack is not so annoying, exhausting, or offensive that it prevents readers from engaging with it and from considering the negation/inversion of the fundamental premises that it relies upon.

From the very top of the first page…

“To solve the hard problem of consciousness we observe that any cognitive system of sufficient power must get into difficulty when it tries to analyze consciousness concepts, because the mechanism that does the analysis will “bottom out” in such a way as to make the system declare these concepts to be both real and ineffable.”

Objections:

1: The phenomenon of consciousness (as distinct from the concept of consciousness) is the only possible container for qualities such as “real” or “ineffable”. It is a mistake to expect the phenomenon itself to be subject to the categories and qualities which are produced only within consciousness.

2: Neither my analysis of the concept nor of the phenomenon of consciousness ‘bottoms out’ in the way described. I would say that consciousness is at once real, more than real, less than real, effable, semi-effable, and trans-effable, but not necessarily ineffable. Consciousness is the aesthetic-participatory nesting of sensory-motive phenomena from which all other phenomena are derived and maintained, including anesthetic, non-participatory, non-sensory, and non-motivated appearances such as those of simple matter and machines.

“This implies that science must concede that there are some aspects of the world that deserve to be called “real”, but which are beyond explanation.”

Here my understanding is that attempting to explain (ex-plain) certain aspects of consciousness is redundant, since they are already ‘plain’. Blue is presented directly as blue. It is a visible phenomenon which is plain to all those who can see it and unexplainable to all those who cannot. There is nothing to explain about the phenomenon itself, as any such effort would only assume that blue can be decomposed into other phenomena which are not blue. There is an implicit bias or double standard in such assumptions, since any of the other phenomena which we might try to use to account for the existence of blue would also require explanations to decompose them further. How do we know that we are even reading words that mean what they mean to another person? As long as a sense of coherence is present, even the most surreal dream experiences can be validated within the dream as perfectly rational and real.

Even the qualifier “real” is also meaningless outside of consciousness. There can be no physical or logical phenomenon which is unreal or can ‘seem’ other than it is without consciousness to provide the seeming. The entire expectation of seeming is an artifact of some limitation on a scope of perception, not of physical or logical fact.

“Finally, behind all of these questions there is the problem of whether we can explain any of the features of consciousness in an objective way, without stepping outside the domain of consensus-based scientific enquiry and becoming lost in a wilderness of subjective opinion.”

This seems to impose a doomed constraint onto any explanation in advance: since the distinction between subjective and objective can only exist within consciousness, consciousness cannot presume to transcend itself by limiting its scope to only those qualities which consciousness itself deems ‘objective’. There is no objective arbiter of objectivity, and presuming that such a standard is equivalent to, or available through, our scientific legacy of consensus is especially biased given that tradition’s intentional reliance on instruments and methods designed to exclude all association with subjectivity.* To ask that an explanation of consciousness be limited to consensus science is akin to asking “Can we explain life without referring to anything beyond the fossil record?” In my understanding, science itself must expand radically to approach the phenomenon of consciousness, rather than consciousness having to be reduced to fit into our cumulative expectations about nature.

“One of the most troublesome aspects of the literature on the problem of consciousness is the widespread confusion about what exactly the word “consciousness” denotes.”

I see this as a sophist objection (ironically, I would also say that this all-too-common observation is one of the most troublesome aspects of materialistic arguments against the hard problem). Personally, I have no confusion whatsoever about what the common sense term ‘consciousness’ refers to, and neither does anyone else when it comes to the actual prospect of losing consciousness. When someone is said to have lost consciousness forever, what is lost? The totality of experience. Everything would be lost for the person whose consciousness is truly and completely lost forever. All that would remain of that person would be the bodily appearances and memories in the conscious experiences of others (doctors, family members, cats, dust mites, etc). If all conscious experience were to terminate forever, what remained would be impossible to distinguish from nothing at all. Indeed there would be no remaining capacity to ‘distinguish’ either.

I will skip over the four bullet points from Chalmers work in 15.1.1 (The ability to introspect or report mental states…etc), as I see them as distractions arising from specific use cases of language and the complex specifics of human psychology rather than from the simple/essential nature of consciousness as a phenomenon.

Moving on to what I see as the meat of the discussion – qualia. In this next section, much is made of the problems of communicating with others about specific phenomenal properties. I see this as another distraction, and if we interrogate this definition of qualia as that which “we cannot describe to a creature that does not claim to experience them”, we will find that it is a condition which everything in the universe fits just as well.

We cannot describe numbers, or gravity, or matter to a creature that does not claim to experience them either. Ultimately the only difference between qualia and non-qualia is that non-qualia only exist hypothetically. Things which are presumed to exist independently of subjectivity, such as matter, energy, time, space, and information are themselves concepts derived from intersubjective consensus. Just as the Flatlander experiences a sphere only as a circle of changing size, our entire view of objective facts and their objectiveness is objectively limited to those modalities of sense and sense-making which we have access to. There is no universe which is real that we could not also experience as the content of a (subjective) dream and no way to escape the constraints that a dream imposes even on logic, realism, and sanity themselves. A complete theory of consciousness cannot merely address the narrow kind of sanity that we are familiar with as thinking adults conditioned by the accumulative influence of Western society, but also of non-ordinary experiences, mystical states of consciousness, infancy, acquired savant syndrome, veridical NDEs and reincarnation accounts, and on and on.

“a philosophical zombie … would behave as if it did have its own phenomenology (indeed its behavior, ex hypothesi, would be absolutely identical to its normal twin) but it would not experience any of the subjective sensations that we experience when we use our minds”

As much as I revere David Chalmers’ brilliant insights into the Hard Problem which he named, I see the notion of a philosophical zombie as flawed from the start. While we can imagine that two biological organisms are physically identical with and without subjective experience, there is no reason to insist that they must be.

I would pose it differently and ask “Can a doll be created that would seem to behave in every respect like a human being, but still be only a doll?” To that question I would respond that there is no logical reason to deny that possibility; however, we also cannot deny the possibility that some people might at some time be able to feel an ‘uncanny’ sense about such a doll, even if they are not able to consciously notice that sense. The world is filled with examples of people who can pretend and act as if they are experiencing subjective states that they are not. Professional actors and sociopaths, for example, are famously able to simulate deep sentiment and emotion, summoning tears on command, etc. I would ask of the AI dev community: what if we wanted to build an AGI simulator which did not have any qualia? Suppose we wanted to study the effects of torture. Could we not hope to engineer a device or program which would allow us to understand some of the effects without having to subject a conscious device to actual excruciating pain? If so, then we cannot presume qualia to emerge automatically from structure or function. We have to have a better understanding of why and how qualia exist in the first place. That is the hard problem of consciousness.

“Similarly, if we know that wires from a red color-detection module are active, this tells us the cognitive level fact that the machine is detecting red, but it does not tell us if the machine is experiencing a sensation of redness, in anything like the way that we experience redness.”

Here I suggest that in fact the machine is not detecting red at all, but it is detecting some physical condition that corresponds to some of our experiences of seeing red, i.e. the open-eye presence of red which correlates to 680nm wavelength electromagnetic stimulation of retinal cells. Since many people can dream and imagine red in the absence of such ophthalmological stimulation**, we cannot equate that detection with red at all.

Further, I would not even allow myself to assume that what a retinal cell or any physical instrument does in response to illumination automatically constitutes ‘detection’. Making such leaps is, in my understanding, precisely how our thinking about the hard problem of consciousness slips into circular reasoning. To see any physical device as a sensor or sense organ is to presume a phenomenological affect on a micro scale, as well as a mechanical effect described by physical force/field mathematics. If we define forces and fields purely as mechanical facts with no sensory-motive entailment, then it follows logically that no complex arrangement of such force-field mechanisms would necessarily result in any addition or emergence of such an entailment. If shining a light on a molecule changes the shape or electrical state of that molecule, every subsequent chain of physical changes effected by that cause will occur with or without any experience of redness. Any behavior that a human body or any species of body can evolve to perform could just as easily have evolved to be performed without anything but unexperienced physical chain reactions of force and field.
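To make the distinction concrete, here is a minimal sketch (the function name and the wavelength band are my own illustrative assumptions, not from the text): a ‘red detector’ of the kind described reduces to a bare numeric comparison, and nothing in the comparison entails any sensation of redness.

```python
# Hypothetical sketch: "detecting red" as a numeric threshold.
# The band limits are approximate conventions for visible red light.
RED_BAND_NM = (620.0, 750.0)

def detects_red(wavelength_nm: float) -> bool:
    """Return True when the wavelength falls in the nominal red band."""
    low, high = RED_BAND_NM
    return low <= wavelength_nm <= high

print(detects_red(680.0))  # True: the 680nm stimulation mentioned above
print(detects_red(500.0))  # False: outside the band
```

The function changes state in response to input, but the chain of events it models would proceed identically with or without any experience of redness, which is the point being argued.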

“The trouble is that when we try to say what we mean by the hard problem, we inevitably end up by saying that something is missing from other explanations. We do not say ‘Here is a thing to be explained,’ we say ‘We have the feeling that there is something that is not being addressed, in any psychological or physical account of what happens when humans (or machines) are sentient.’”

To me, this is a false assumption that arises from an overly linguistic approach to the issue. I do in fact say “Here is a thing to be explained”. In fact, I could use that very same word “here” as an example of that thing. What is the physical explanation for the referent of the term “here”. What gives a physical event the distinction of being ‘here’ versus ‘there’?

The presence of something like excruciating pain can’t be dismissed on account of a compulsion to assume that ‘Ouch!’ needs to be deconstructed into nociception terminology. I would turn this entire description of “the trouble” around and ask the author why they feel that there is something about pain that is not communicated to anyone who experiences it directly, and how anything meaningful about that experience could be addressed by other, non-painful accounts.

On to the dialectic between skeptic and phenomenologist:

“The difficulty we have in supplying an objective definition should not be taken as grounds for dismissing the problem—rather, this lack of objective definition IS the problem!”

I’m not sure why a phenomenologist would say that. To me, the hard problem of consciousness has nothing at all to do with language. We have no problem communicating “Ouch!” any more than we have in communicating ”  “. The only problem is in the expectation that all terms should translate into all languages. There is no problem with reducing a subjective quality of phenomenal experience into a word or gesture – the hard problem is why and how there should be any inflation of non-phenomenal properties to ‘experience’ in the first place. I don’t find it hard to articulate, though many people do seem to have a hard time accepting that it makes sense.

“In effect, there are certain concepts that, when analyzed, throw a monkey wrench into the analysis mechanism”

I would reconstruct that observation this way: “In effect, there are certain concepts that, when analyzed, point to facts beyond the analysis mechanism, and further beyond mechanism and analysis. These are the facts of qualia from which the experiences of analysis and mechanical appearance are derived.”

“All facets of consciousness have one thing in common: they involve some particular types of introspection, because we “look inside” at our subjective experience of the world”

Not at all. Introspection is clearly dependent on consciousness, but so are all forms of experience. Introspection does not define consciousness, it is only a conscious experience of trying to make intellectual sense of one’s own conscious experience. Looking outside requires as much consciousness as looking inside and unconscious phenomena don’t ‘look’.

From that point in the chapter, there is a description of some perfectly plausible ideas about how to design a mechanism which would appear to us to simulate the behaviors of an intelligent thinker, but I see no connection between such a simulation and the hard problem of consciousness. The premise underestimates consciousness to begin with and then goes on to speculate on how to approximate that disqualified version of qualia production, consistently mistaking qualia for ‘concepts’ that cannot be described.

Pain is not a concept, it is a percept. Every function of the machine described could just as easily be presented as hexadecimal code, words, binary electronic states, etc. A machine could put together words that we recognize as having to do with pain, but that sense need not be available to the machine. In the mechanistic account of consciousness, sensory-motive properties are taken for granted and aesthetic-participatory elaborations of those properties that we would call human consciousness are misattributed to the elaborations of mechanical process. That “blue” cannot be communicated to someone who cannot see it does not define what blue is. Building a machine that cannot explain what is happening beyond its own mechanism doesn’t mean that qualia will automatically appear to stand in for that failure. Representation requires presentation, but presentation does not require representation. Qualia are presentations, including the presentation of representational qualities between presentations.
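The interchangeability of presentations described above can be shown with a small sketch (the byte string is my own example, not from the text): the same machine state renders equally well as text, hexadecimal, or binary, and no rendering is privileged from the machine’s side.

```python
# One machine state, three human-facing presentations.
state = b"OUCH"  # hypothetical example bytes

as_text = state.decode("ascii")                  # a word we read as pain-related
as_hex = state.hex()                             # the same bytes as hexadecimal
as_binary = " ".join(f"{b:08b}" for b in state)  # the same bytes as binary states

print(as_text)    # OUCH
print(as_hex)     # 4f554348
print(as_binary)  # 01001111 01010101 01000011 01001000
```

None of these renderings carries the percept of pain; the ‘sense’ of the word belongs to the reader, not to the bytes.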

“Yes, but why would that short circuit in my psychological mechanism cause this particular feeling in my phenomenology?”

Yes, exactly, but that’s still not the hard problem. The hard problem is “Why would a short circuit in any mechanism cause any feeling or phenomenology in the first place? Why would feeling even be a possibility?”

“The analysis mechanism inside the mind of the philosopher who raises this objection will then come back with the verdict that the proposed explanation fails to describe the nature of conscious experience, just as other attempts to explain consciousness have failed. The proposed explanation, then, can only be internally consistent with itself if the philosopher finds the explanation wanting. There is something wickedly recursive about this situation.”

Yes, it is wickedly recursive in the same exact way that any blind faith/Emperor’s New Clothes persuasion is wickedly recursive. What is proposed here can be used to claim that any false theory about consciousness which predicts that it will be perceived as false is evidence of its (mystical, unexplained) essential truth. It is the technique of religious dogma in which doubt is defined as evidence of the unworthiness of the doubter to deserve to understand why it isn’t false.

“I am not aware of any objection to the explanation proposed in this chapter that does not rely for its force on that final step, when the philosophical objection deploys the analysis mechanism, and thereby concludes that the proposal does not work because the analysis mechanism in the head of the philosopher returned a null result.”

Let me try to make the reader aware of one such objection then. I do not use an analysis mechanism, I use the opposite – an anti-mechanism of direct participation that seeks to discover greater qualities of sense and coherence for their own aesthetic saturation. That faculty of my consciousness does not return a null result, it has instead returned a rich cosmogony detailing the relationships between a totalistic spectrum of aesthetic-participatory nestings of sensory-motive phenomena, and its dialectic, diffracted altars; matter (concrete anesthetic appearances) and information (abstract anesthetic appearances).

“I am now going to make a case that all of the various subjective phenomena associated with consciousness should be considered just as “real” as any other phenomena in the universe, but that science and philosophy must concede that consciousness has the special status of being unanalyzable.”

I’m glad that qualia are at least given a ‘real’ status! I don’t see that it’s unanalyzable though. I analyze qualia all the time. I think the limitation is that the analysis doesn’t translate into math or geometry…which is exactly what I would expect because I understand the role of math and geometry to be precisely the qualia which are presented to represent the disqualification of alienated/out of bounds qualia. We don’t experience on a geological timescale, so our access to experiences on that scale is reduced to a primitive vocabulary of approximations. I suggest that when two conscious experiences of vastly disparate timescales engage with each other, there is a mutual rendering of each other as either inanimate or intangible…as matter/object or information/concept.

In the latter parts of this chapter, the focus is on working with the established hypothesis of qualia as bottomed-out mechanical analysis. The irony of this is that I can see clearly that it is math and physics, mechanism and analysis which are the qualia of bottomed out direct perception. The computationalist and physicalist both have got the big picture turned inside out, where the limitations of language and formalism are hallucinated into sources of infinite aesthetic creativity. Sight is imagined to emerge naturally from imperfect blindness. It’s an inversion of map and territory on the grandest possible scale.

“When we say that a concept is more real the more concrete and tangible it is, what we actually mean is that it gets more real the closer it gets to the most basic of all concepts. In a sense there is a hierarchy of realness among our concepts, with those concepts that are phenomenologically rich being the most immediate and real, and with a decrease in that richness and immediacy as we go toward more abstract concepts.”

To the contrary, when we say that a concept is more real the more concrete and tangible it is, what we actually mean is that it gets more real the further it gets from the most abstract of all qualia: concepts. No concepts are as phenomenologically rich, immediate, and real as literally everything that is not a concept.

“This seems to me a unique and unusual compromise between materialist and dualist conceptions of mind. Minds are a consequence of a certain kind of computation; but they also contain some mysteries that can never be explained in a conventional way.”

Here too, I see that the opposite clearly makes more sense. Computation is a consequence of certain kinds of reductive approximations within a specific band of consciousness. To compute or calculate is actually the special and (to us) mysterious back door to the universal dream which enables dreamers to control and objectify aspects of their shared experience.

I do love all of the experiments proposed toward the end, although it seems to me that all of the positive results could be produced by a device designed to simulate the same behaviors without any qualia. Of all of the experiments, I think that the mind-meld is most promising, as it could possibly expose our own consciousness to phenomena beyond our models and expectations. We may be able, for example, to connect our brain to the brain of a fish and really be able to tell that we are feeling what the fish is feeling. Because my view of consciousness is that it is absolutely foundational, all conscious experience overlaps at that fundamental level in an ontological way rather than merely as a locally constructed model. In other words, while some aspects of empathy may consist only of modeling the emotions of another person (as a sociopath might do), I think that there is a possibility for genuine empathy to include a factual sharing of experience, even beyond assumed boundaries of space, time, matter, and energy.

Thank you for taking the time to read this. I would not bother writing it if I didn’t think that it was important. The hard problem of consciousness may seem to some as an irrelevant, navel-gazing debate, but if I am on the right track in my hypothesis, it is critically important that we get this right before attempting to modify ourselves and our civilization based on a false assumption of qualia as information.

Respectfully and irreverently yours,

Craig Weinberg

*This point is addressed later on in the chapter: “it seems almost incoherent to propose a scientific (i.e. non-subjective) explanation for consciousness (which exists only in virtue of its pure subjectivity).”

**Not to mention the reports from people blind from birth of seeing colors during Near Death Experiences.

De-Simulating Natural Intelligence

May 24, 2019 1 comment

Hi friends! I’m getting ready for my poster presentation at the Science of Consciousness conference in Interlaken:

Abstract: In recent years, scientific and popular imagination has been captured by the idea that what we experience directly is a neuro-computational simulation. At the same time, there is a contradictory idea that some things that we experience, such as the existence of brains and computers, are real enough to allow us to create fully conscious and intelligent devices. This presentation will try to explain where this logic breaks down, why true intelligence may never be generated artificially, and why that is good news. Recent studies have suggested that human perception is not as limited as previously thought and that while machines can do many things better than we can, becoming conscious may not be one of them. The approach taken here can be described as a Variable Aspect Monism or Multisense Realism, and it seeks to clarify the relationship between physical form, logical function, and aesthetic participation.

In Natural Intelligence, intelligence is abstracted from within a full spectrum of aesthetically rich experience that developed over billions of years of evolving sensation and participation.

In Artificial “Intelligence”, intelligence is abstracted from outside the natural, presumably narrow range of barely aesthetic experience that has remained relatively unchanged over human timescales (but has changed over geological timescales, evolving, presumably, very different aesthetics).

What Multisense Realism proposes is more pansensitivity than panpsychism.

The standard notion of panpsychism is what I would call ‘promiscuous panpsychism’, meaning that every atom has to be ‘conscious’ in a kind of thinking, understanding way. I think that this promiscuity is what makes panpsychism unappealing to many/most people.

Under pansensitivity, intelligence 𝒅𝒊𝒗𝒆𝒓𝒈𝒆𝒔 from a totalistic absolute, diffracting through calibrated degrees of added insensitivity. It’s like in school when kids draw a colorful picture and then cover it with black crayon (the pre-big bang) and then begin to scratch it off to reveal the colors underneath. The black crayon is entropy, the scratching is negentropy, and the size of the revealed image is the degree of aesthetic saturation.

So yes, the physical substances that we use to build machines are forms of conscious experience, but they are very low level, low aesthetics which don’t necessarily scale up on their own (since they have not evolved over billions of years of natural experience by themselves).

I think that despite our success in putting our own high-level aesthetic experience into code that we use to manipulate hardware, it is still only reflecting our own natural ‘psychism’ back to us, rather than truly exporting it into the machine hardware.

Can Qualia Be Simulated?

January 19, 2019 4 comments

My response to this Quora question:

The Integrated Information Theory claims that a computer simulation of a brain would produce the same behaviour but wouldn’t have any qualia. If qualia don’t make any difference, does that mean they don’t exist? Is that a contradiction?

There are several considerations upon which the answer to this question hinges:

  • The nature of simulation and behavior.
    1. The term simulation is an informal one. I don’t place a high value on discussing the definition of words, but I think that it is essential that if we are talking about something that exists in the world, we have to understand what that thing is supposed to be. I would say that the contemporary sense of ‘simulation’ goes back to early applications of computer software, specifically Flight Simulator programs. We have since become accustomed to using video ‘simulations’ of everything from fighting on a battlefield to performing surgery. Does it make sense to ask whether a flight simulator is producing the same behavior as an airplane? If it did, would we say that the program had produced a flight from Rome to New York? If the flight simulator crashed, would we have to have a funeral for the simulated passengers? I would say no. Common sense would tell us that the simulation is just software…the airplane isn’t real. This takes us to the next consideration, what is real?
    2. The term real is an informal one as well. We talk about ‘reality’ but that can refer to some abstract truth that we seem to agree on or to a concrete world that we seem to share. To understand why there may be an important difference between a simulation and the ‘real thing’ that is being simulated, we should approach it in a more rigorous way. Flying a real airplane involves tons of physical matter, as well as countless causal links to the world/universe. The real airplane is the result of billions of years of accumulated change in the physical universe, as well as the evolution of numerous species and societies to engineer flight. There is a common comparison of the flight of an airplane to the flight of a bird or insect, where we are meant to think of both types of physical acts as ‘flying’, even though that flight is accomplished in quite different ways. I think that this comparison, however, is misleading. I would look to the famous quote by Alfred Korzybski, “The map is not the territory” instead when relating to simulating consciousness. Whether it is a literal geographical map or some other piece of graphic ‘art’ that ‘maps’ to a potentially real (in the concrete, worldly sense) place, the idea is that just because something appears visually similar to us does not mean that there is any other deep connection between the two. I’m not a photograph of my face. I’m not even a video of myself talking. This understanding is also expressed in the famous Magritte painting “The Treachery of Images”.
  • The nature of qualia.
    • Properly understood, what the term ‘qualia’ refers to exists by definition. It can get a little mystical if we rely on descriptions of qualia such as “what X is like” or “what it is like to feel X”, so I think it adds clarity if we look at it this way: qualia is what is experienced. Information is a concept. Matter is a concept. Concepts are experienced also, but what the concept of matter refers to should be divided between matter as defined by the Standard Model (which has to do with exotic elementary “particles/waves” such as bosons and fermions that make up the slightly less exotic atoms) and physical matter, which is made of the atoms on the periodic table.
    • What we experience directly is not physical matter. What we experience are aesthetic presentations with tactile/tangible qualities such as shape, position, weight, texture, etc. We can dream of worlds filled with tangible objects, and we can interact with them as if they were physical matter, but these dream objects are not composed of the elements on the periodic table. The question of whether these objects are real depends on whether we are able to wake up from the dream. If we do not ever awaken from a dream, I don’t see any way of evaluating the realism of the contents of the dream. To the contrary, when we do awaken from a dream, we are often puzzled by our acceptance of dream conditions which seem clearly absurd and impossible.
    • That fact is very important in my view, as it tells us either that it is impossible to ever know whether anything we are experiencing is real, or that if we can know reality when we truly experience it, then experience must be anchored to reality in a way that is deeper than the contents of what is experienced. In other words, if I can’t tell that I’m dreaming when the pink elephant offers me a cigarette, and if I can have dreams which include false awakenings, then I can’t logically ever know that I’m not dreaming. If, however, actual awakening is as unmistakable as it seems, then there must be some capacity of our consciousness to know reality that extends beyond any sort of empirical symptom or logical deduction.
    • Qualia then, refers to the inarguably real experience of the color red, regardless of whether that experience is associated with the excitation of physical matter producing visible-wavelength electromagnetism in our physical eyeballs, or whether that experience is purely in our imagination. If we want to say that even imagination is surely the product of physical activity in the brain, we can make that assumption of physicalism, but now we have two completely different sources of ‘red’. They are so mechanically different, and the conversion of either one of the sources into ‘experienced red’ is so poorly understood, that all that physicalism can offer is that somehow there must be some mathematical similarity between the visible EM in the eyeball and the invisible neurochemistry scattered in many different regions of the brain which will eventually account for their apparent unity. We do not seem to be able to define a difference between red that is seen in a dream and red that is seen through our eyes, and we also are not able to define how either a brain or photon produces that quale of experienced red. The hard problem of consciousness is to imagine a reason why any such thing as experienced red exists at all, when all physical evidence points only to biochemical changes which are not red.
  • The nature of information, physical matter, and qualia.
    • Now that we have separated qualia (aesthetic-participatory presentations) from matter (the scientific concept of concrete structures in public space), we can move on to understanding information. This is a very controversial subject, made more controversial by the fact that many people do not think it is controversial. There is a popular view that information is physically real, and those who hold it will cite factual relationships with concepts of physical theory such as entropy. To make it more confusing, there is a separate concept of information entropy, based on the work of engineers like Claude Shannon who studied communication. Depending on how you look at it, information entropy and thermodynamic entropy can be equivalent or opposite.
    • In any case, the concept of entropy seems to blur together the behavior of physical structures and the perception of groups of structures and appearances into ‘systems’. This whole area is like intellectual quicksand, and getting ourselves out of it requires a very disciplined effort to separate different levels of sensation, perception, ‘figuration’ or identification, attention, and understanding. Because of my experience of having learned to read English as a child, I no longer have access to the raw sensation or perception level of English writing. I can’t look at these shapes on my screen and not see Latin characters and English words. Even upside down, I am still ‘informed’ by the training of my perception to read English. This would not be the case for someone who had never read English, however most adults on Earth would be able to identify the look of them as words in the English language, even though they can’t read or pronounce them. Anyone who does read English could at least try to phonetically sound out other European languages, but they may not be able to even attempt that for other languages that don’t use the Latin alphabet.
    • All of this to say that there may be no such thing as information ‘out there’. The degree to which we are ‘informed’ is limited by our capacities for both sensing and making sense. There may be no such thing as a ‘pattern’ which is separate from a conscious experience in which an aesthetic presentation is recognized as a pattern. This was a heavy revelation for me, and one which transformed my view of nature from an essentially computationalist/physicalist framework based on pattern to one based on an aesthetic-participatory framework in which nature is made of a kind of universal ‘qualia’.
    • If my view is on the right track, information does not produce qualia at all, rather information is one minimalist presentation of qualia which is perceived as having a quality of potentially ‘re-presenting’ another conscious experience. This too is a major revelation, since if true, it means that machines like computers don’t actually compute. They don’t actually input, output, or store numbers, they just serve as a physical mechanism which we use to modify our own conscious experience in a very precisely controlled way. If we unplug our monitors, nothing changes as far as the computer is concerned. If we are playing a game, the computer will continue to execute the program in total darkness. We could even plug in some kind of audio device instead of a video screen and now hear a cacophony of noises that doesn’t resemble a game at all. The information is the same from the computer’s point of view, but the change in the aesthetic presentation has made that information inaccessible to us. My hypothesis then is that perceptual access precedes information. If information is a “difference that makes a difference” then perception is the “afferent” phenomena which have to be available for an “efferent” act of comparison and recognition as “different”.
  • The assumption of emergent properties.
    • The idea that the integration of information produces qualia such as sights, sounds, and feelings depends on the idea of emergence. This idea, in turn, is based on the correlation between our conscious experience and the behavior of a brain. We have to be convinced that our conscious experience is generated by the physical matter of the brain. This alone pushes us toward a strong emergence theory in which consciousness is simply a thing that brains do, or that biology does, or that complex, information-integrating physical structures of any sort do (as in IIT).
    • Balanced against that is the increasing number of anomalies that suggest that the brain, while clearly having a role in how human and animal consciousness is made available, may not be a generator of consciousness. It may be the case that our particular sort of consciousness has conditioned us to prioritize the tangible, visible aspects of our experience as being the most real, but there is no logical, objective reason to assume that is true. It may be that physics and information ‘emerge’ from the way a complex conscious experience interacts with other concurrent experiences on vastly different scales. Trying to build a simulation of a brain and expecting a personal conscious experience to emerge from it may be as misguided as building a special boat to try to sail down an impossible canal in an Escher drawing.
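As a concrete anchor for the Shannon sense of ‘information entropy’ discussed above, here is a minimal sketch (the example byte strings and the bits-per-symbol framing are my choices, and nothing in it settles the philosophical question of whether the quantity exists ‘out there’ apart from a perceiver):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(data)
    n = len(data)
    return 0.0 - sum((c / n) * math.log2(c / n) for c in counts.values())

# Two equiprobable symbols carry exactly one bit per symbol...
print(shannon_entropy(b"abababab"))  # → 1.0
# ...while a single repeated symbol carries none.
print(shannon_entropy(b"aaaaaaaa"))  # → 0.0
```

Note that the quantity is defined only relative to a chosen way of carving the data into symbols, which is congenial to the point above about perception preceding information.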

 

Can Effort Be Simulated?

January 12, 2019 Leave a comment

This may seem like an odd question, but I think that it is a great one if you’re thinking about AI and the hard problem of consciousness.

Let’s say I want my dishwasher to feel the sense of effort that I feel when I wash dishes. How would I do it? It could make groaning noises or seem to procrastinate by refusing to turn on for days on end, but this would be completely pointless from a practical perspective and it would only seem like effort in my imagination. In reality, any machine can be made to perform any function that it is able to do for as long as the physical parts hold up without any effort on anyone’s part. That’s why they are machines. That’s why we replace human labor with robot labor…because it’s not really labor at all.

It is very popular to think of human beings as a kind of machine and the brain as a kind of computer, but imagine if that were really true. You could wash dishes for your entire lifetime and do nothing else. If someone wanted a house, you could simply build it for them. Machines are useful precisely because they don’t have to try to do anything. They have no sense of effort. They don’t care what they do or don’t do.

You might say, “There’s nothing special about that. Biological organisms just evolved to have this sense of effort to model physiological limits.” OK, but what possible value would that have for survival? Under what circumstances would it serve an organism to work less than the maximum that it could physiologically? Any consideration such as conserving energy for the winter would naturally be rolled into the maximum allowed by the regulatory systems of the body.

So, I say no. Effort cannot be simulated. Effort is not equal to energy or time. It is a feeling which is so powerful that it dictates everything that we are able to do and unable to do. Effort is a telltale sign of consciousness. If we could sleep while we do the dishes, we would, because we would not have to feel the discomfort of expending effort to do it.

Any computer, AI, or robot that would be useful to us could not possibly have a sense of its own efforts as being difficult. Once we understand how a sense of effort is truly antithetical to machine behaviors, perhaps we can then begin to see why consciousness in general cannot be simulated. How would an AI that has no sense of not wanting to do the dishes ever be able to truly understand which activities are pleasurable and which are painful?

Perverting a Survey of AI Theories

January 11, 2019 Leave a comment

In this post, I shamelessly cannibalize, invert, and repurpose a great diagram of contemporary AI theory categories from here.

msr_antiai

Joscha Bach: We need to understand the nature of AI to understand who we are – Part 2

December 17, 2018 1 comment

This is the second part of my comments on Nikola Danaylov’s interview of Joscha Bach: https://www.singularityweblog.com/joscha-bach/

My commentary on the first hour is here. Please watch or listen to the podcast as there is a lot that is omitted and paraphrased in this post. It’s a very fast paced, high-density conversation, and I would recommend listening to the interview in chunks and following along here for my comments if you’re interested.

JB_Part2

1:00:00 – 1:10:00

JB – Conscious attention, in a sense, is the ability to make indexed memories that I can later recall. I also store the expected result and the triggering condition. When do I expect the result to be visible? Later I have feedback about whether the decision was good or not. I compare the result I expected with the result that I got, and I can undo the decision that I made back then. I can change the model or reinforce it. I think that this is the primary mode of learning that we use, beyond just associative learning.

JB – 1:01:00 Consciousness means that you will remember what you had attended to. You have this protocol of ‘attention’. The memory of the binding state itself, the memory of being in that binding state where you have this observation that combines as many perceptual features as possible into a single function. The memory of that is phenomenal experience. The act of recalling this from the protocol is Access Consciousness. You need to train the attentional system so it knows where you store your backend cognitive architecture. This is recursive access to the attentional protocol, you remember when you make the recall. You don’t do this all the time, only when you want to train this. This is reflexive consciousness. It’s the memory of the access.

CW – By that definition, I would ask if consciousness couldn’t exist just as well without any phenomenal qualities at all. It is easy to justify consciousness as a function after the fact, but I think that this seduces us into thinking that something impossible can become possible just because it could provide some functionality. To say that phenomenal experience is a memory of a function that combines perceptual features is to presume that there would be some way for a computer program to access its RAM as perceptual features rather than as the (invisible, unperceived) states of the RAM hardware itself.

JB – Then there is another thing, the self. The self is a model of what it would be like to be a person. The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them. They cannot even approximate arbitrary functions, except by evolution, which takes a very long time. What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear. A simulation basically is isomorphic to the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating. We are characters in that movie. The movie is a complete simulation, a VR that is running in the neocortex.

You and me are characters in this VR. In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe. We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification. That doesn’t seem to be a useful condition. The brain is normally set up so that the self thinks that it’s real, and gets access to the language center, and we can talk to each other, and here we are. The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious. Some people think that a simulation cannot be conscious, only a physical system can, but they’ve got it completely backwards. A physical system cannot be conscious, only a simulation can be conscious. Consciousness is a simulated property of a simulated self.

CW – To say “The self is a model of what it would be like to be a person” seems to be circular reasoning. The self is already what it is like to be a person. If it were a model, then it would be a model of what it’s like to be a computer program with recursively binding (binding) states. Then the question becomes, why would such a model have any “what it’s like to be” properties at all? Until we can explain exactly how and why a phenomenal property is an improvement over the absence of a phenomenal property for a machine, there’s a big problem with assuming the role of consciousness or self as ‘model’ for unconscious mechanisms and conditions. Biological machines don’t need to model, they just need to behave in the ways that tend toward survival and reproduction.

(JB) “The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them”.

CW – I agree with this, to the extent that I agree that if there were any such thing as *purely* physical structures, they would not feel anything, and they would just be tangible geometric objects in public space. Rather than physical activity somehow leading to emergent non-physical ‘feelings’, it makes more sense to me that physics is made of “feelings” which are so distant and different from our own that they are rendered as tangible geometric objects. It could be that physical structures appear in these limited modes of touch perception rather than in their own native spectrum of experience because they are much slower/faster and older than our own.

To say that neurons or brains feel would be, in my view, a category error since feeling is not something that a shape can logically do, just by Occam’s Razor, and if we are being literal, neurons and brains are nothing but three-dimensional shapes. The only powers that a shape could logically have are geometric powers. We know from analyzing our dreams that a feeling can be symbolized as a seemingly solid object or a place, but a purely geometric cell or organ would have no way to access symbols unless consciousness and symbols are assumed in the first place.

If a brain has the power to symbolize things, then we shouldn’t call it physical. The brain does a lot of physical things but if we can’t look into the tissue of the brain and see some physical site of translation from organic chemistry into something else, then we should not assume that such a transduction is physical. The same goes for computation. If we don’t find a logical function that changes algorithms into phenomenal presentations then we should not assume that such a transduction is computational.

(JB) “What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear.”

CW – Here also the reasoning seems circular. Useful to know what? “What it is like” doesn’t have to mean anything to a machine or program. To me this is like saying that a self-driving car would find it useful to create a dashboard and pretend that it is driven by a person using that dashboard rather than being driven directly by the algorithms that would be used to produce the dashboard.

(JB) “A simulation basically is isomorphic in the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating.”

CW – I have thought of it that way, but now I think that it makes more sense if we see both the brain and the person as parts of a movie that is branching off from a larger movie. I propose that timescale differentiation is the primary mechanism of this branching, although timescale differentiation is only one sort of perceptual lensing that allows experiences to include and exclude each other.

I think that we might be experiential fragments of an eternal experience, and a brain is a kind of icon that represents part of the story of that fragmentation. The brain is a process made of other processes, which are all experiences that have been perceptually lensed by the senses of touch and sight to appear as tangible and visible shapes.

The brain has no mechanical reason to make movies, it just has to control the behavior of a body in such a way that repeats behaviors which have happened to coincide with bodies surviving and reproducing. I can think of some good reasons why a universe which is an eternal experience would want to dream up bodies and brains, but once I plug up all of the philosophical leaks of circular reasoning and begging the question, I can think of no plausible reason why an unconscious body or brain would or could dream.

All of the reasons that I have ever heard arise as post hoc justifications that betray an unscientific bias toward mechanism. In a way, the idea of mechanism as omnipotent is even more bizarre than the idea of an omnipotent deity, since the whole point of a mechanistic view of nature is to replace undefined omnipotence with robustly defined, rationally explained parts and powers. If we are just going to say that emergent phenomenal magic happens once the number of shapes or data relations is so large that we don’t want to deny any power to it, we are really just reinventing religious faith in an inverted form. It is to say that sufficiently complex computations transcend computation for reasons that transcend computation.

(JB) “The movie is a complete simulation, a VR that is running in the neocortex.”

CW – We have the experience of playing computer games using a video screen, so we conflate a computer program with a video screen’s ability to render visible shapes. In fact, it is our perceptual relationship with the video screen that is doing the most critical part of the simulating. The computer by itself, without any device that can produce visible color and contrast, would not fool anyone. There’s no parsimonious or plausible way to justify giving the physical states of a computing machine aesthetic qualities unless we are expecting aesthetic qualities from the start. In that case, there is no honest way to call them mere computers.

(JB) “In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe.”

Computer programs don’t need desires or rewards though. Programs are simply executed by physical force. Algorithms don’t need to serve a purpose, nor do they need to be enticed to serve a purpose. There’s no plausible, parsimonious reason for the brain to write its predictive algorithms or meta-algorithms as anything like a ‘feeling’ or sensation. All that is needed for a brain is to store some algorithmically compressed copy of its own brain state history. It wouldn’t need to “feel” or feel “what it’s like”, or feel what it’s like to “be in a universe”. These are all concepts that we’re smuggling in, post hoc, from our personal experience of feeling what it’s like to be in a universe.

(JB) “We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification.”

CW – It’s easy to say that it’s not very useful knowledge if it doesn’t fit our theory, but we need to test for that bias scientifically. It might just be that people depersonalize or have negative results to the idea that they don’t really exist because it is false, and false in a way that is profoundly important. We may be as real as anything ever could be, and there may be no ‘simulation’ except via the power of imagination to make believe.

(JB) “The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious.”

CW – I don’t see a logical need for that. Attention need not logically facilitate any phenomenal properties. Attention can just as easily be purely behavioral, as can ‘memory’, or ‘models’. A mechanism can be triggered by groups of mechanisms acting simultaneously without any kind of semantic link defining one mechanism as a model for something else. Think of it this way: What if we wanted to build an AI without ANY phenomenal experience? We could build a social chameleon machine, a sociopath with no model of self at all, but instead a set of reflex behaviors that mimic those of others which are deemed to be useful for a given social transaction.

(JB) “A physical system cannot be conscious, only a simulation can be conscious.”

CW – I agree this is an improvement over the idea that physical systems are conscious. What would it mean for a ‘simulation’ to exist in the absence of consciousness though? A simulation implies some conscious audience which participates in believing or suspending disbelief in the reality of what is being presented. How would it be possible for a program to simulate part of itself as something other than another (invisible, unconscious) program?

(JB) “Consciousness is a simulated property of a simulated self.”

CW – I turn that around 180 degrees. Consciousness is the sole absolutely authentic property. It is the base level sanity and sense that is required for all sense-making to function on top of. The self is the ‘skin in the game’ – the amplification of consciousness via the almost-absolutely realistic presentation of mortality.

KD – So in a way, Daniel Dennett is correct?

JB – Yes,[…] but the problem is that the things that he says are not wrong, but they are also not non-obvious. It’s valuable because there are no good or bad ideas. It’s a good idea if you comprehend it and it elevates your current understanding. In a way, ideas come in tiers. The value of an idea for the audience is if it’s a half tier above the audience. You and me have an illusion that we find objectively good ideas, because we work at the edge of our own understanding, but we cannot really appreciate ideas that are a couple of tiers above our own ideas. One tier is a new audience, two tiers means that we don’t understand the relevance of these ideas because we don’t have the ideas that we need to appreciate the new ideas. An idea appears to be great to us when we can stand right in its foothills and look at it. It doesn’t look great anymore when we stand on the peak of another idea and look down and realize the previous idea was just the foothills to that idea.

KD – Discusses the problems with the commercialization of academia and the negative effects it has on philosophy.

JB – Most of us never learn what it really means to understand, largely because our teachers don’t. There are two types of learning. One is that you generalize over past examples, and we call that stereotyping if we’re in a bad mood. The other tells us how to generalize, and this is indoctrination. The problem with indoctrination is that it might break the chain of trust. If someone doesn’t check the epistemology of the people that came before them and takes their word as authority, that’s a big difficulty.

CW – I like the ideas of tiers because it confirms my suspicion that my ideas are two or three tiers above everyone else’s. That’s why y’all don’t get my stuff…I’m too far ahead of where you’re coming from. 🙂

1:07:00 Discussion about Ray Kurzweil, the difficulty in predicting timeline for AI, confidence, evidence, outdated claims and beliefs etc.

1:19        JB – The first stage of AI: finding things that require intelligence to do, like playing chess, and then implementing them as algorithms. Manually engineering strategies for being intelligent in different domains. This didn’t scale up to General Intelligence.

We’re now in the second phase of AI: building algorithms to discover algorithms. We build learning systems that approximate functions. He thinks deep learning should be called compositional function approximation: using networks of many functions instead of tuning single regressions.
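Bach’s phrase ‘compositional function approximation’ can be illustrated with a toy sketch (my construction, not his; the weights are hand-picked rather than learned, which sidesteps training entirely). Composing two simple layers represents XOR, a function that no single linear regression can fit:

```python
import math

def layer(weights, biases, xs):
    """One simple parameterized function: affine map followed by tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, xs)) + b)
            for row, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hand-picked weights (illustrative, not learned): composing two
    # simple layers represents XOR.
    h = layer([[4, 4], [-4, -4]], [-2, 6], [x1, x2])  # hidden layer
    (y,) = layer([[4, 4]], [-6], h)                   # output layer
    return y > 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```

In a real deep learning system the weights would be found by gradient descent rather than by hand, but the structural point is the same: many simple functions nested into one expressive one.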

There could be a third phase of AI where we build meta-learning algorithms. Maybe our brains are meta-learning machines, not just learning stuff but learning ways of discovering how to learn stuff (for a new domain). At some point there will be no more phases and science will effectively end because there will be a general theory for global optimization with finite resources and all science will use that algorithm.
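The three phases summarized above might be caricatured in a few lines (my framing; the ‘doubling’ task, the learning rule, and the learning-rate search are all invented for illustration):

```python
# Phase 1: the programmer writes the intelligent rule directly.
def phase1_double(x):
    return 2 * x

# Phase 2: the rule is *learned* from examples by tuning a parameter.
def fit_scale(examples, lr=0.01, steps=2000):
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            w += lr * (y - w * x) * x  # gradient step on squared error
    return w

# Phase 3 (meta-learning): search over the learning procedure itself -
# here, trivially, picking whichever learning rate fits best.
def meta_fit(examples, candidate_lrs=(0.001, 0.01, 0.1)):
    def loss(w):
        return sum((y - w * x) ** 2 for x, y in examples)
    return min((fit_scale(examples, lr) for lr in candidate_lrs), key=loss)

data = [(1, 2), (2, 4), (3, 6)]  # examples of 'doubling'
print(fit_scale(data), meta_fit(data))  # both ≈ 2.0
```

Real meta-learning searches far richer spaces than a learning rate, but the nesting (a procedure that tunes a procedure that tunes parameters) is the idea being described.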

CW – I think that the more experience we gain with AI, the more we will see that it is limited in ways that we have not anticipated, and also that it is powerful in ways that we have not anticipated. I think that we will learn that intelligence as we know it cannot be simulated, however, in trying to simulate it, we will have developed something powerful, new, and interesting in its impersonal orthogonality to personal consciousness. The revolution may not be about the rise of computers becoming like people but of a rise in appreciation for the quality and richness of personal conscious experience in contrast to the impersonal services and simulations that AI delivers.

1:23        KD – Where does ethics fit, or does it?

JB – Ethics is often misunderstood. It’s not about being good or emulating a good person. Ethics emerges when you conceptualize the world as different agents, and yourself as one of them, and you share purposes with the other agents but you have conflicts of interest. If you think that you don’t share purposes with the other agents, if you’re just a lone wolf, and the others are your prey, there’s no reason for ethics – you only look for the consequences of your actions for yourself with respect to your own reward functions. It’s not ethics though – not a shared system of negotiation – because only you matter, because you don’t share a purpose with the others.

KD – It’s not shared but it’s your personal ethical framework, isn’t it?

JB – It has to be personal. I decided not to eat meat because I felt that I shared a purpose with animals: the avoidance of suffering. I also realized that it is not mutual. Cows don’t care about my suffering. They don’t think about it a lot. I had to think about the suffering of cows, so I decided to stop eating meat. That was an ethical decision. It’s a decision about how to resolve conflicts of interest under conditions of shared purpose. I think this is what ethics is about. It’s a rational process in which you negotiate with yourself and with others, the resolution of conflicts of interest under contexts of shared purpose. I can make decisions about what purposes we share. Some of them are sustainable and others are not – they lead to different outcomes. In a sense, ethics requires that you conceptualize yourself as something above the organism; that you identify with the systems of meanings above yourself so that you can share a purpose. Love is the discovery of shared purpose. There needs to be somebody you can love that you can be ethical with. At some level you need to love them. You need to share a purpose with them. Then you negotiate; you don’t want them to fail in all regards, nor yourself. This is what ethics is about. It’s computational too. Machines can be ethical if they share a purpose with us.

KD – Other considerations: Perhaps ethics can be a framework within which two entities that do not share interests can negotiate in and peacefully coexist, while still not sharing interests.

JB – Not interests but purposes. If you don’t share purposes then you are defecting against your own interests when you don’t act on your own interest. It doesn’t have integrity. You don’t share a purpose with your food, other than that you want it to be nice and edible. You don’t fall in love with your food, it doesn’t end well.

CW – I see this as a kind of game-theoretic view of ethics…which I think is itself (unintentionally) unethical. I think it is true as far as it goes, but it makes assumptions about reality that are ultimately inaccurate, as it begins by defining reality in terms of a game. I think this automatically elevates the intellectual function and its objectivizing/controlling agendas at the expense of aesthetic/empathetic priorities. What if reality is not a game? What if the goal is not to win by being a winner but to improve the quality of experience for everyone and to discover and create new ways of doing that?

Going back to JB’s initial comment that ethics is not about being good or emulating a good person, I’m not sure about that. I suspect that many people, especially children, will be ethically shaped by encounters with someone, perhaps in the family or a character in a movie, who appeals to them and who inspires imitation. Whether their appeal is as a saint or a sinner, something about their style, the way they communicate or demonstrate courage, may align the personal consciousness with transpersonal ‘systems of meanings above’ themselves. It could also be a negative example that someone encounters: someone you hate who inspires you to embody the diametrically opposite aesthetics and ideals.

I don’t think that machines can be ethical or unethical, not because I think humans are special or better than machines, but out of simple parsimony. Machines don’t need ethics. They perform tasks, not for their own purposes, or for any purpose, but because we have used natural forces and properties to perform actions that satisfy our purposes. Try as we might (and I’m not even sure why we would want to try), I do not think that we will succeed in changing matter or computation into something which both can be controlled by us and which can generate its own purposes. I could be wrong, but I think this is a better reason to be skeptical of AI than any reason that computation gives us to be skeptical of consciousness. It also seems to me that the aesthetic power of a special person who exemplifies a particular set of ethics can be taken to be a symptom of a larger, absolute aesthetic power in divinity or in something like absolute truth. This doesn’t seem to fit the model of ethics as a game-theoretic strategy.

JB – Discussion about eating meat, offers example pro-argument that it could be said that a pasture raised cow could have a net positive life experience since they would not exist but for being raised as food. Their lives are good for them except for the last day, which is horrible, but usually horrible for everyone. Should we change ourselves or change cattle to make the situation more bearable? We don’t want to look at it because it is un-aesthetic. Ethics in a way is difficult.

KD – That’s the key point of ethics. It requires sometimes we make choices that are not in our own best interests perhaps.

JB – It depends on how we define our self. We could say that the self is identical to the well-being of the organism, but this is a very short-sighted perspective. I don’t actually identify all the way with my organism. There are other things – I identify with society, my kids, my relationships, my friends, their well-being. I am all the things that I identify with and want to regulate in a particular way. My children are objectively more important than me. If I have to make a choice whether my kids survive or myself, my kids should survive. This is as it should be if nature has wired me up correctly. You can change the wiring, but this is also the weird thing about ethics. Ethics becomes very tricky to discuss once the reward function becomes mutable. When you are able to change what is important to you, what you care about, how do you define ethics?

CW – And yet, the reward function is mutable in many ways. Our experience in growing up seems to be marked by a changing appreciation for different kinds of things, even in deriving reward from controlling one’s own appetite for reward. The only constant that I see is in phenomenal experience itself. No matter how hedonistic or ascetic, how eternalist or existential, reward is defined by an expectation for a desired experience. If there is no experience that is promised, then there is no function for the concept of reward. Even in acts of self-sacrifice, we imagine that our action is justified by some improved experience for those who will survive after us.

KD – I think you can call it a code of conduct or a set of principles and rules that guide my behavior to accomplish certain kinds of outcomes.

JB – There are no beliefs without priors. What are the priors that you base your code of conduct on?

KD – The priors or axioms are things like diminishing suffering or taking an outside/universal view. When it comes to (me not eating meat), I take a view that is hopefully outside of both me and the cows. I’m able to look at the suffering involved in eating a cow and their suffering of being eaten. If my prior is ‘minimize suffering’, because my test criterion for a sentient being is ‘can it suffer?’, then minimizing suffering must be my guiding principle in how I relate to another entity. Basically, everything builds up from there.

JB – The most important part of becoming an adult is taking charge of your own emotions – realizing that your emotions are generated by your own brain/organism, and that they are here to serve you. You’re not here to serve your emotions. They are here to help you do the things that you consider to be the right things. That means that you need to be able to control them, to have integrity. If you are just a victim of your emotions and do not do the things that you know are the right things, you don’t have integrity. What is suffering? Pain is the result of some part of your brain sending a teaching signal to another part of your brain to improve its performance. If the regulation is not correct, because you cannot actually regulate that particular thing, the pain signal will usually endure and increase until your brain figures it out and turns off the signaling center, because it’s not helping. In a sense, suffering is a lack of integrity. The difficulty is only that many beings cannot get to the degree of integrity where they can control the application of learning signals in their brain…control the way that their reward function is computed and distributed.

CW – My criticism is the same as in the other examples. There’s no logical need for a program or machine to invent ‘pain’ or any other signal to train or teach. If there is a program to run an animal’s body, the program need only execute those functions which meet the criteria of the program. There’s no way for a machine to be punished or rewarded because there’s no reason for it to care about what it is doing. If anything, caring would impede optimal function. If a brain doesn’t need to feel to learn, then why would a brain’s simulation need to feel to learn?

KD – According to your view, suffering is a simulation or part of a simulation.

JB – Everything that we experience is a simulation. We are a simulation. To us it feels real. There is no getting around this. I have learned in my life that all of my suffering is a result of not being awake. Once I wake up, I realize what’s going on. I realize that I am a mind. The relevance of the signals that I perceive is completely up to the mind. The universe does not give me objectively good or bad things. The universe gives me a bunch of electrical impulses that manifest in my thalamus, and my brain makes sense of them by creating a simulated world. The valence in that simulated world is completely internal – it’s completely part of that world, it’s not objective…and I can control this.

KD – So you are saying suffering is subjective?

JB – Suffering is real to the self with respect to ethics, but it is not immutable. You can change the definition of your self, the things that you identify with. We don’t have to suffer about things, political situations for example, if we recognize them to be mechanical processes that happen regardless of how we feel about them.

CW – The problem with the idea of simulation is that we are picking and choosing which features of our experience are more isomorphic to what we assume is an unsimulated reality. Such an assumption is invariably a product of our biases. If we say that the world we experience is a simulation running on a brain, why not also say that the brain is also a simulation running on something else? Why not say that our experience of success with manipulating our own experience of suffering is as much of a simulation as the original suffering was? At some point, something has to genuinely sense something. We should not assume that just because our perception can be manipulated we have used manipulation to escape from perception. We may perceive that we have escaped one level of perception, or objectified it, but this too must be presumed to be part of the simulation as well. Perception can only seem to have been escaped in another perception. The primacy of experience is always conserved.

I think that it is the intellect that is over-valuing the significance of ‘real’ because of its role in protecting the ego and the physical body from harm, but outside of this evolutionary warping, there is no reason to suspect that the universe distinguishes in an absolute sense between ‘real’ and ‘unreal’. There are presentations – sights, sounds, thoughts, feelings, objects, concepts, etc., but the realism of those presentations can only be made of the same types of perceptions. We see this in dreams, with false awakenings etc. Our dream has no problem with spontaneously confabulating experiences of waking up into ‘reality’. This is not to discount the authenticity of waking up in ‘actual reality’, only to say that if we can tell that it is authentic, then it necessarily means that our experience is not detached from reality completely and is not meaningfully described as a simulation. There are some recent studies that suggest that our perception may be much closer to ‘reality’ than we thought, i.e. that we can train ourselves to perceive quantum level changes.

If that holds up, we need to re-think the idea that it would make sense for a bio-computer to model or simulate a phenomenal reality that is so isomorphic and redundant to the unperceived reality. There’s not much point in a 1 to 1 scale model. Why not just put the visible photons inside the visual cortex in exactly the field that we see? I think that something else is going on. There may not be a simulation, only a perceptual lensing between many different concurrent layers of experience – not a dualism or dual-aspect monism, but a variable aspect monism. We happen to be a very, very complex experience which includes the capacity to perceive aspects of its own perception in an indirect or involuted rendering.

KD – Stoic philosophy says that we suffer not from events or things that happen in our lives, but from the stories that we attach to them. If you change the story, you can change the way you feel about them and reduce suffering. Let go of things we can’t really control, body, health, etc. The only thing you can completely control is your thoughts. That’s where your freedom and power come to be. In that mind, in that simulation, you’re the God.

JB – This ability to make your thoughts more truthful is Western enlightenment, in a way – aufklärung in German. There is also this other sense of enlightenment, erleuchtung, that you have in a spiritual context. So aufklärung fixes your rationality and erleuchtung fixes your motivation. It fixes what’s relevant to you and your relationship between self and the universe. Often they are seen as mutually exclusive, in the sense that aufklärung leads to nihilism, because you don’t give up your need for meaning, you just prove that it cannot be satisfied. God does not exist in any way that can set you free. In this other sense, you give up your understanding of how the world actually works so that you can be happy. You go down to a state where all people share the same cosmic consciousness, which is complete bullshit, right? But it’s something that removes the illusion of separation and the suffering that comes with the separation. It’s unsustainable.

CW – This duality of aufklärung and erleuchtung I see as another expression of the polarity of the universal continuum of consciousness. Consciousness vs machine, East vs West, Wisdom vs Intelligence. I see both extremes as having pathological tendencies. The Western extreme is cynical, nihilistic, and rigid. The Eastern extreme is naïve, impractical, and delusional. Cosmic consciousness or God does not have to be complete bullshit, but it can be a hint of ways to align ourselves and bring about more positive future experiences, both personally and or transpersonally.

Basically, I think that both the brain and the dreamer of the brain are themselves part of a larger dream that may or may not be like a dreamer. It may be that these possibilities are in participatory superposition, like an ambiguous image, so that what we choose to invest our attention in can actually bias experienced outcomes toward a teleological or non-teleological absolute. Maybe our efforts could result in the opposite effect also, or some combination of the two. If the universe consists of dreams and dreamed dreamers, then it is possible for our personal experience to include a destiny where we believe one thing about the final dream and find out we were wrong, or right, or wrong then right then wrong again, etc. forever.

KD – Where does that leave us with respect to ethics though? Did you dismantle my ethics, the suffering test?

JB – Yeah, it’s not good. The ethic of eliminating suffering eventually leads us to eliminating all life. Anti-natalism – stop bringing organisms into the world to eliminate suffering, and end the lives of those organisms already here as painlessly as possible – is this what you want?

KD – (No) So what’s your ethics?

JB – Existence is basically neutral. Why are there so few stoics around? It seems so obvious – only worry about things to the extent that worrying helps you change them…so why is almost nobody a Stoic?

KD – There are some Stoics and they are very inspirational.

JB – I suspect that Stoicism is maladaptive. Most cats I have known are Stoics. If you leave them alone, they’re fine. Their baseline state is ok, they are ok with themselves and their place in the universe, and they just stay in that place. If they are hungry or want to play, they will do the minimum that they have to do to get back into their equilibrium. Human beings are different. When they get up in the morning they’re not completely fine. They need to be busy during the day, but in the evening they feel fine. In the evening they have done enough to make peace with their existence again. They can have a beer and be with their friends and everything is good. Then there are some individuals who have so much discontent within themselves that they can’t take care of it in a single day. From an evolutionary perspective, you can see how this would be adaptive for a group oriented species. Cats are not group oriented. For them, it’s rational to be a Stoic. If you are a group animal, it makes sense for individuals to overextend themselves for the good of the group – to generate a surplus of resources for the group.

CW – I don’t know if we can generalize about humans that way. Some people are more like cats. I will say that I think it is possible to become attached to non-attachment. The stoic may learn to disassociate from the suffering of life, but this too can become a crutch or ‘spiritual bypass’.

KD – But evolution also diversifies things. Evolution hedges its bets by creating diversity, so some individuals will be more adaptive to some situations than others.

JB – That may not be true. In larger habitats we don’t find more species. Competition is more fierce. We reduce the number of species dramatically. We are probably eventually going to look like a meteor as far as obliterating species on this planet.

KD – So what does that mean for ethics in technology? What’s the solution? Is there room for ethics in technology?

JB – Of course. It’s about discovering the long game. You have to look at the long term influences and you also have to question why you think it’s the right thing to do, what the results of that are, which gets tricky.

CW – I think that all that we can do is to experiment and be open to the possibilities that our experiments themselves may be right or wrong. There may be no way of letting ourselves off the hook here. We have to play the game as players with skin in the game, not as safe observers studying only those rules that we have invested in already.

KD – We can agree on that, but how do you define ethics yourself?

JB – There are some people in AI who think that ethics are a way for politically savvy people to get power over STEM people…and with considerable success. It’s largely a protection racket. Ethical studies are relatable and so make a big splash, but it would rarely happen that a self-driving car would have to make those decisions. My best answer of how I define ethics myself is that it is the principled negotiation of conflicts of interest under conditions of shared purpose. When I look at other people, I mostly imagine myself as being them in a different timeline. Everyone is in a way me on a different timeline, but in order to understand them I need to flip a number of bits. These bits are the conditions of negotiation that I have with you.

KD – Where do cows fit in? We don’t have a shared purpose with them. Can you have shared purpose with respect to the cows then?

JB – The shared purpose doesn’t objectively exist. You basically project a shared meaning above the level of the ego. The ego is the function that integrates expected rewards over the next fifty years.

KD – That’s what Peter Singer calls the Universe point of view, perhaps.

JB – If you can go to this Eternalist perspective where you integrate expected reward from here to infinity, most of that being outside of the universe, this leads to very weird things. Most of my friends are Eternalists. All these Romantic Russian Jews, they are like that, in a way. This Eastern European shape of the soul. It creates something like a conspiracy, it creates a tribe, and it’s very useful for corporations. Shared meaning is a very important thing for a corporation that is not transactional. But there is a certain kind of illusion in it. To me, meaning is like the Ring of Mordor. If you drop the ring, you will lose the brotherhood of the ring and you will lose your mission. You have to carry it, but very lightly. If you put it on, you will get super powers but you get corrupted because there is no meaning. You get drawn into a cult that you create…and I don’t want to do that…because it’s going to shackle my mind in ways that I don’t want it to be bound.
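Bach’s contrast between the ego (“integrates expected rewards over the next fifty years”) and the Eternalist (“from here to infinity”) maps neatly onto discounted reward sums. A hedged illustration, with the discount factor, constant reward, and horizon chosen purely for the example:

```python
# Illustrative only: the "ego" as a finite-horizon discounted reward sum,
# the "Eternalist" as the infinite-horizon limit. With discount gamma < 1
# the infinite sum converges to r / (1 - gamma), so even an infinite
# horizon yields a finite value.

def integrated_reward(r, gamma, horizon):
    """Sum of discounted rewards r over `horizon` steps."""
    return sum(r * gamma**t for t in range(horizon))

r, gamma = 1.0, 0.98
ego = integrated_reward(r, gamma, horizon=50)  # the ~fifty-year self
eternalist_limit = r / (1 - gamma)             # closed form as horizon -> infinity
assert ego < eternalist_limit                  # the longer view always values more
```

The design point is just that the two perspectives differ only in the integration horizon, not in the machinery doing the integrating.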

CW – I agree it is important not to get drawn into a cult that we create, however, what I have found is that the drive to negate superstition tends toward its own cult of ‘substitution’. Rather than the universe being a divine conspiracy, the physical universe is completely innocent of any deception, except somehow for our conscious experience, which is completely deceptive, even to the point of pretending to exist. How can there be a thing which is so unreal that it is not even a thing, and yet come from a universe that is completely real and only does real things?

KD – I really like that way of seeing but I’m trying to extrapolate from your definition of ethics a guide of how we can treat the cows and hopefully how the AIs can treat us.

JB – I think that some people have this idea that is similar to Asimov, that at some point the Roombas will become larger and more powerful so that we can make them washing machines, or let them do our shopping, or nursing…that we will still enslave them but negotiate conditions of co-existence. I think that what is going to happen instead is that corporations, which are already intelligent agents that just happen to borrow human intelligence, automate their decision making. At the moment, a human being can often outsmart a corporation, because the corporation has so much time in between updating its Excel spreadsheets and the next weekly meetings. Imagine it automates and weekly meetings take place every millisecond, and the thing becomes sentient and understands its role in the world, and the nature of physics and everything else. We will not be able to outsmart that anymore, and we will not live next to it, we will live inside of it. AI will come from top down on us. We will be its gut flora. The question is how we can negotiate that it doesn’t get the idea to use antibiotics, because we’re actually not good for anything.

KD – Exactly. And why wouldn’t they do that?

JB – I don’t see why.

CW – The other possibility is that AI will not develop its own agendas or true intelligence. That doesn’t mean our AI won’t be dangerous, I just suspect that the danger will come from our misinterpreting the authority of a simulated intelligence rather than from a genuine mechanical sentience.

KD – Is there an ethics that could guide them to treat us just like you decided to treat the cows when you decided not to eat meat?

JB – Probably no way to guarantee all AIs would treat us kindly. If we used the axiom of reducing suffering to build an AI that will be around for 10,000 years and keep us around too, it will probably kill 90% of the people painlessly and breed the rest into some kind of harmless yeast. This is not what you want, even though it would be consistent with your stated axioms. It would also open a Pandora’s Box to wake up as many people as possible so that they will be able to learn how to stop their suffering.

KD – Wrapping up

JB – Discusses book he’s writing about how AI has discovered ways of understanding the self and consciousness which we did not have 100 years ago. The nature of meaning, how we actually work, etc. The field of AI is largely misunderstood. It is different from the hype; it is largely, in a way, statistics on steroids. It’s identifying new functions to model reality. It’s largely experimental and has not gotten to the state where it can offer proofs of optimality. It can do things in ways that are much better than the established rules of statisticians. There is also going to be a convergence between econometrics, causal dependency analysis, statistics, and AI. It’s all going to be the same in a particular way, because there’s only so many ways that you can make mathematics about reality. We confuse this with the idea of what a mind is. They’re closely related. I think that our brain contains an AI that is making a model of reality and a model of a person in reality, and this particular solution of what a particular AI can do in the modeling space is what we are. So in a way we need to understand the nature of AI, which I think is the nature of sufficiently general function approximation, maybe all the truth that can be found by an embedded observer, in particular kinds of universes that have the power to create it. This could be the question of what AI is about, how modeling works in general. For us the relevance of AI is how does it explain who we are. I don’t think there is anything else that can.

CW – I agree that AI development is the next necessary step to understanding ourselves, but I think that we will be surprised to find that General Intelligence cannot be simulated and that this will lead us to ask the deeper questions about authenticity and irreducibly aesthetic properties.

KD – So by creating AI, we can perhaps understand the AI that is already in our brain.

JB – We already do. The ideas of Minsky and many others who have contributed to this field are already better than anything that we had 200 years ago. We could only develop many of these ideas because we began to understand the nature of modeling – the status of reality.

The nature of our relationship to the outside world. We started out with this dualistic intuition in our culture, that there is a thinking substance (Res Cogitans) and an extended substance (Res Extensa)…a universe of stuff in space and a universe of ideas. We now realize that they both exist, but they both exist within the mind. We understand that everything perceptual gets mapped to a region in three space, but we also understand that physics is not a three space, it’s something else entirely. The three space exists only as a potential of electromagnetic interactions at a certain order of magnitude above the Planck length where we are entangled with the universe. This is what we model, and this looks three dimensional to us.

CW – I am sympathetic to this view, however, I suggest an entirely different possibility. Rather than invoking a dualism of existing in the universe and existing ‘in the mind’, I see that existence itself is an irreducibly perceptual-participatory phenomenon. Our sense of dualism may actually reveal more insights into our deeper reality than those insights which assume that tangible objects and information exist beyond all perception. The more we understand about things like quantum contextuality and relativity, I think the more we have to let go of the compulsion to label things that are inconvenient to explain as illusions. I see Res Cogitans and Res Extensa as opposite poles of a Res Aesthetica continuum which is absolute and eternal. It is through the modulation of aesthetic lensing that the continuum is diffracted into various modalities of sense experience. The cogitans of software and the extensa of hardware can never meet except through the mid-range spectrum of perception. It is from that fertile center, I suspect, that most of the novelty and richness of the universe is generated, not from sterile algorithms or game-theoretic statistics on the continuum’s lensed peripheries.

JB – Everything else we come up with that cannot be mapped to three space is Res Cogitans. If we transfer this dualism into a single mind then we have the idealistic monism that we have in various spiritual teachings – this idea that there is no physical reality, that we live in a dream. We are characters dreamed by a mind on a higher plane of existence and that’s why miracles are possible. Then there is this Western perspective of a mechanical universe. It’s entirely mechanical, there’s no conspiracy going on. Now we understand that these things are not in opposition, they’re complements. We actually do live in a dream but the dream is generated by our neocortex. Our brain is not a machine that can give us access to reality as it is, because that’s not possible for a system that is only measuring a few bits at a systemic interface. There are no colors and sounds on Earth. We already know that.

CW – Why stop at colors and sounds though? How can we arbitrarily say that there is an Earth or a brain when we know that it is only a world simulated by some kind of code. If we unravel ourselves into evolution, why not keep going and unravel evolution as well? Maybe colors and sounds are a more insightful and true reflection of what nature is made of than the blind measurements that we take second hand through physical instruments? It seems clear to me that this is a bias which has not yet properly appreciated the hints of relativity and quantum contextuality. If we say that physics has no frame of reference, then we have to understand that we may be making up an artificial frame of reference that seems to us like no frame of reference. If we live in a dream, then so does the neocortex. Maybe they are different dreams, but there is no sound scientific reason to privilege every dream in the universe except our own as real.

JB – The sounds and colors are generated as a dream inside your brain. The same circuits that make dreams during the night make dreams during the day. This is in a way our inner reality that’s being created on a brain. The mind on a higher plane of existence exists; it’s a brain of a primate that’s made of cells and lives in a mechanical physical universe. Magic is possible because you can edit your memories. You can make that simulation anything that you want it to be. Many of these changes are not sustainable, which is why the sages warn against using magic(k), because down the line, if you change your reward function, bad things may happen. You cannot break the bank.

KD – To simplify all of this, we need to understand the nature of AI to understand ourselves.

JB – Yeah, well, I would say that AI is the field that took up the slack after psychology failed as a science. Psychology got terrified of overfitting, so it stopped making theories of the mind as a whole, it restricted itself to theories with very few free parameters so it could test them. Even those didn’t replicate, as we know now. After Piaget, psychology largely didn’t go anywhere, in my perspective. It might be too harsh because I see it from the outside, and outsiders of AI might argue that AI didn’t go very far, and as an insider I’m more partial here.

CW – It seems to me that psychology ran up against a barrier that is analogous to Gödel’s incompleteness. To go on trying to objectify subjectivity necessarily brings into question the tools of formalism themselves. I think that it may have been that transpersonal psychology had come too far too fast, and that there is still more to be done for the rest of our scientific establishment to catch up. Popular society is literally not yet sane enough to handle a deep understanding of sanity.

KD – I have this metaphor that I use every once in a while, saying that technology is a magnifying mirror. It doesn’t have an essence of its own but it reflects the essences that we put in it. It’s not a perfect image because it magnifies and amplifies things. That seems to go well with the idea that we have to understand the nature of AI to understand who we are.

JB – The practice of AI is 90% automation of statistics and making better statistics that run automatically on machines. It just so happens that this is largely co-extensional with what minds do. It also so happens that AI was founded by people like Minsky who had fundamental questions about reality.

KD – And what’s the last 10%?

JB – The rest is people coming up with dreams about our relationship to reality, using the concepts that we develop in AI. We identify models that we can apply in other fields. It’s the deeper insights. It’s why we do it – to understand. It’s to make philosophy better. Society still needs a few of us to think about the deep questions, and we are still here, and the coffee is good.

CW – Thanks for taking the time to put out quality discussions like this. I agree that technology is a neutral reflector/magnifier of what we put into it, but I think that part of what we have to confront as individuals and as a society is that neutrality may not be enough. We may now have to decide whether we will make a stand for authentic feeling and significance or to rely on technology which does not feel or understand significance to make that decision for us.
