De-Simulating Natural Intelligence

May 24, 2019

Hi friends! I’m getting ready for my poster presentation at the Science of Consciousness conference in Interlaken:

Abstract: In recent years, scientific and popular imagination has been captured by the idea that what we experience directly is a neuro-computational simulation. At the same time, there is a contradictory idea that some things that we experience, such as the existence of brains and computers, are real enough to allow us to create fully conscious and intelligent devices. This presentation will try to explain where this logic breaks down, why true intelligence may never be generated artificially, and why that is good news. Recent studies have suggested that human perception is not as limited as previously thought and that while machines can do many things better than we can, becoming conscious may not be one of them. The approach taken here can be described as a Variable Aspect Monism or Multisense Realism, and it seeks to clarify the relationship between physical form, logical function, and aesthetic participation.

In Natural Intelligence, intelligence is abstracted from within a full spectrum of aesthetically rich experience that developed over billions of years of evolving sensation and participation.

In Artificial “Intelligence”, intelligence is abstracted from outside the natural, presumably narrow range of barely aesthetic experience that has remained relatively unchanged over human timescales (but has changed over geological timescales, evolving, presumably, very different aesthetics).


What Multisense Realism proposes is more pansensitivity than panpsychism.

The standard notion of panpsychism is what I would call ‘promiscuous panpsychism’, meaning that every atom has to be ‘conscious’ in a kind of thinking, understanding way. I think that this promiscuity is what makes panpsychism unappealing to many/most people.

Under pansensitivity, intelligence *diverges* from a totalistic absolute, diffracting through calibrated degrees of added insensitivity. It’s like in school when kids draw a colorful picture and then cover it with black crayon (the pre-big bang) and then begin to scratch it off to reveal the colors underneath. The black crayon is entropy, the scratching is negentropy, and the size of the revealed image is the degree of aesthetic saturation.

So yes, the physical substances that we use to build machines are forms of conscious experience, but they are very low-level, low-aesthetic forms which don’t necessarily scale up on their own (since they have not evolved through billions of years of natural experience by themselves).

I think that despite our success in putting our own high-level aesthetic experience into code that we use to manipulate hardware, it still only reflects our own natural ‘psychism’ back to us, rather than truly exporting it into the machine hardware.

Holosense Model

February 6, 2019

[Holosense model diagram, versions 1 and 2]

Version 2: better or worse?

Sense and Simulation

February 5, 2019

Kneeyo

1. Nothing that can be experienced is a simulation.

There are different levels of perception (experiences of experience) and interpretation (experiences of understanding perceptions), and they can spoof each other, but all experiences are as fundamentally real as any physical substance or process.

If you look into a mirror, you are *really* seeing a *real* image; it’s just that your body isn’t really inside the mirror. Your physical body can’t actually be seen, it can only be touched and felt. What can be seen is an image (made of color-contrast shapes) that reflects both low-level tangible-public and high-level intangible-psychological conditions.

2. The Hard Problem of Consciousness can be reduced to this question: “How can a particle, force, or field become sensitive?”

I think that the answer is that it cannot. Rather, we have to invert our Western presumptions about nature and understand that fields and forces are concepts that may need to be replaced by a more accurate one: direct sensory-perceptive and motive-participatory phenomena, aka nested conscious experiences.
Particles are the way that the division and polarization of experience is rendered in the tangible-tactile modality of sensory-perception.

They are not sensitive, and no structure composed of particles is sensitive, just as no words made of letters generate meaning. The particles and structures, words and letters, are literally place-holders…spatiotemporally anchored addresses through which experiences can be organized in increasingly complex, rich, and meaningful ways. This is what nature and the universe are: an anti-mechanical sensory experience of mechanically divided experiencers…an aesthetic holos that renders its self-diffraction through anesthetic holography.

Can Qualia Be Simulated?

January 19, 2019

My response to this Quora question:

The Integrated Information Theory claims that a computer simulation of a brain would produce the same behaviour but wouldn’t have any qualia. If qualia don’t make any difference, does that mean they don’t exist? Is that a contradiction?

There are several considerations upon which the answer to this question hinges:

  • The nature of simulation and behavior.
    1. The term simulation is an informal one. I don’t place a high value on discussing the definition of words, but I think that it is essential that if we are talking about something that exists in the world, we have to understand what that thing is supposed to be. I would say that the contemporary sense of ‘simulation’ goes back to early applications of computer software, specifically Flight Simulator programs. We have since become accustomed to using video ‘simulations’ of everything from fighting on a battlefield to performing surgery. Does it make sense to ask whether a flight simulator is producing the same behavior as an airplane? If it did, would we say that the program had produced a flight from Rome to New York? If the flight simulator crashed, would we have to have a funeral for the simulated passengers? I would say no. Common sense would tell us that the simulation is just software…the airplane isn’t real. This takes us to the next consideration, what is real?
    2. The term real is an informal one as well. We talk about ‘reality’ but that can refer to some abstract truth that we seem to agree on or to a concrete world that we seem to share. To understand why there may be an important difference between a simulation and the ‘real thing’ that is being simulated, we should approach it in a more rigorous way. Flying a real airplane involves tons of physical matter, as well as countless causal links to the world/universe. The real airplane is the result of billions of years of accumulated change in the physical universe, as well as the evolution of numerous species and societies to engineer flight. There is a common comparison of the flight of an airplane to the flight of a bird or insect, where we are meant to think of both types of physical acts as ‘flying’, even though that flight is accomplished in quite different ways. I think that this comparison, however, is misleading. I would look to the famous quote by Alfred Korzybski, “The map is not the territory” instead when relating to simulating consciousness. Whether it is a literal geographical map or some other piece of graphic ‘art’ that ‘maps’ to a potentially real (in the concrete, worldly sense) place, the idea is that just because something appears visually similar to us does not mean that there is any other deep connection between the two. I’m not a photograph of my face. I’m not even a video of myself talking. This understanding is also expressed in the famous Magritte painting “The Treachery of Images”.
  • The nature of qualia.
    • Properly understood, what the term ‘qualia’ refers to exists by definition. It can get a little mystical if we rely on descriptions of qualia such as “what X is like” or “what it is like to feel X”, so I think it adds clarity if we look at it this way: Qualia is what is experienced. Information is a concept. Matter is a concept. Concepts are experienced also, but what the concept of matter refers to should/must be divided into the idea of matter as defined by the Standard Model (which has to do with exotic elementary “particles/waves” such as bosons and fermions which make up slightly less exotic atoms). Physical matter is made of atoms on the periodic table.
    • What we experience directly is not physical matter. What we experience are aesthetic presentations with tactile/tangible qualities such as shape, position, weight, texture, etc. We can dream of worlds filled with tangible objects, and we can interact with them as if they were physical matter, but these dream objects are not composed of the elements on the periodic table. The question of whether these objects are real depends on whether we are able to wake up from the dream. If we do not ever awaken from a dream, I don’t see any way of evaluating the realism of the contents of the dream. To the contrary, when we do awaken from a dream, we are often puzzled by our acceptance of dream conditions which seem clearly absurd and impossible.
    • That fact is very important in my view, as it tells us either that it is impossible to ever know whether anything we are experiencing is real, or that if we can know reality when we truly experience it, then experience must be anchored to reality in a way that is deeper than the contents of what is experienced. In other words, if I can’t tell that I’m dreaming when the pink elephant offers me a cigarette, and if I can have dreams which include false awakenings, then I can’t logically ever know that I’m not dreaming. If, however, actual awakening is as unmistakable as it seems, then there must be some capacity of our consciousness to know reality that extends beyond any sort of empirical symptom or logical deduction.
    • Qualia then, refers to the inarguably real experience of the color red, regardless of whether that experience is associated with the excitation of physical matter producing visible-wavelength electromagnetism in our physical eyeballs, or whether that experience is purely in our imagination. If we want to say that even imagination is surely the product of physical activity in the brain, we can make that assumption of physicalism, but now we have two completely different sources of ‘red’. They are so mechanically different, and the conversion of either one of the sources into ‘experienced red’ is so poorly understood, that all that physicalism can offer is that somehow there must be some mathematical similarity between the visible EM in the eyeball and the invisible neurochemistry scattered in many different regions of the brain which will eventually account for their apparent unity. We do not seem to be able to define a difference between red that is seen in a dream and red that is seen through our eyes, and we also are not able to define how either a brain or photon produces that quale of experienced red. The hard problem of consciousness is to imagine a reason why any such thing as experienced red exists at all, when all physical evidence points only to biochemical changes which are not red.
  • The nature of information, physical matter, and qualia.
    • Now that we have separated qualia (aesthetic-participatory presentations) from matter (scientific concept of concrete structures in public space), we can move on to understanding information. This is a very controversial subject, made more controversial by the fact that many people do not think it is controversial. There is a popular view that information is physically real, and its proponents will cite factual relationships with concepts of physical theory such as entropy. To make it more confusing, there is a separate concept of information entropy, based on the work of engineers like Claude Shannon who studied communication. Depending on how you look at it, information entropy and thermodynamic entropy can be equivalent or opposite.
    • In any case, the concept of entropy seems to blur together the behavior of physical structures and the perception of groups of structures and appearances into ‘systems’. This whole area is like intellectual quicksand, and getting ourselves out of it requires a very disciplined effort to separate different levels of sensation, perception, ‘figuration’ or identification, attention, and understanding. Because of my experience of having learned to read English as a child, I no longer have access to the raw sensation or perception level of English writing. I can’t look at these shapes on my screen and not see Latin characters and English words. Even upside down, I am still ‘informed’ by the training of my perception to read English. This would not be the case for someone who had never read English; however, most adults on Earth would be able to identify them by their look as words in the English language, even though they can’t read or pronounce them. Anyone who does read English could at least try to phonetically sound out other European languages, but they may not be able to even attempt that for other languages that don’t use the Latin alphabet.
    • All of this to say that there may be no such thing as information ‘out there’. The degree to which we are ‘informed’ is limited by our capacities for both sensing and making sense. There may be no such thing as a ‘pattern’ which is separate from a conscious experience in which an aesthetic presentation is recognized as a pattern. This was a heavy revelation for me, and one which transformed my view of nature from an essentially computationalist/physicalist framework based on pattern to one based on an aesthetic-participatory framework in which nature is made of a kind of universal ‘qualia’.
    • If my view is on the right track, information does not produce qualia at all; rather, information is one minimalist presentation of qualia which is perceived as having a quality of potentially ‘re-presenting’ another conscious experience. This too is a major revelation, since if true, it means that machines like computers don’t actually compute. They don’t actually input, output, or store numbers; they just serve as a physical mechanism which we use to modify our own conscious experience in a very precisely controlled way. If we unplug our monitors, nothing changes as far as the computer is concerned. If we are playing a game, the computer will continue to execute the program in total darkness. We could even plug in some kind of audio device instead of a video screen and now hear a cacophony of noises that doesn’t resemble a game at all. The information is the same from the computer’s point of view, but the change in the aesthetic presentation has made that information inaccessible to us. My hypothesis then is that perceptual access precedes information. If information is a “difference that makes a difference”, then perception is the “afferent” phenomenon which has to be available for an “efferent” act of comparison and recognition as “different”.
  • The assumption of emergent properties.
    • The idea that the integration of information produces qualia such as sights, sounds, and feelings depends on the idea of emergence. This idea, in turn, is based on our correlation between our conscious experience and the behavior of a brain. We have to be convinced that our conscious experience is generated by the physical matter of the brain. This alone provides us with the need to resort to a strong emergence theory of consciousness simply being a thing that brains do, or that biology does, or that complex, information-integrating physical structures of any sort do (as in IIT).
    • Balanced against that is the increasing number of anomalies that suggest that the brain, while clearly having a role in how human and animal consciousness is made available, may not be a generator of consciousness. It may be the case that our particular sort of consciousness has conditioned us to prioritize the tangible, visible aspects of our experience as being the most real, but there is no logical, objective reason to assume that is true. It may be that physics and information ‘emerge’ from the way a complex conscious experience interacts with other concurrent experiences on vastly different scales. Trying to build a simulation of a brain and expecting a personal conscious experience to emerge from it may be as misguided as building a special boat to try to sail down an impossible canal in an Escher drawing.
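The earlier point about unplugged monitors can be caricatured in a few lines of code (a sketch of my own, not anything drawn from IIT): the same byte sequence “presents” as text, numbers, or bits depending entirely on the decoding a perceiver brings to it; nothing in the machine distinguishes these renderings.

```python
# Hypothetical illustration: one byte sequence, three "presentations".
# The bytes never change; only the perceiver's mapping does.
data = bytes([72, 105, 33])

as_text = data.decode("ascii")               # the "monitor" rendering
as_numbers = list(data)                      # the "debugger" rendering
as_bits = "".join(f"{b:08b}" for b in data)  # the "oscilloscope" rendering

print(as_text)     # Hi!
print(as_numbers)  # [72, 105, 33]
print(as_bits)     # 010010000110100100100001
```

This is the sense in which the information is “the same from the computer’s point of view”: the choice among text, numbers, and bits lives entirely on our side of the screen.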

 

Can Effort Be Simulated?

January 12, 2019

This may seem like an odd question, but I think that it is a great one if you’re thinking about AI and the hard problem of consciousness.

Let’s say I want my dishwasher to feel the sense of effort that I feel when I wash dishes. How would I do it? It could make groaning noises or seem to procrastinate by refusing to turn on for days on end, but this would be completely pointless from a practical perspective and it would only seem like effort in my imagination. In reality, any machine can be made to perform any function that it is able to do for as long as the physical parts hold up without any effort on anyone’s part. That’s why they are machines. That’s why we replace human labor with robot labor…because it’s not really labor at all.

It is very popular to think of human beings as a kind of machine and the brain as a kind of computer, but imagine if that were really true. You could wash dishes for your entire lifetime and do nothing else. If someone wanted a house, you could simply build it for them. Machines are useful precisely because they don’t have to try to do anything. They have no sense of effort. They don’t care what they do or don’t do.

You might say, “There’s nothing special about that. Biological organisms just evolved to have this sense of effort to model physiological limits.” OK, but what possible value would that have for survival? Under what circumstances would it serve an organism to work less than the maximum that it could physiologically? Any consideration such as conserving energy for the winter would naturally be rolled into the maximum allowed by the regulatory systems of the body.

So, I say no. Effort cannot be simulated. Effort is not equal to energy or time. It is a feeling which is so powerful that it dictates everything that we are able to do and unable to do. Effort is a telltale sign of consciousness. If we could sleep while we do the dishes, we would, because we would not have to feel the discomfort of expending effort to do it.

Any computer, AI, or robot that would be useful to us could not possibly have a sense of its own efforts as being difficult. Once we understand how a sense of effort is truly antithetical to machine behaviors, perhaps we can then begin to see why consciousness in general cannot be simulated. How would an AI that has no sense of not wanting to do the dishes ever be able to truly understand which activities are pleasurable and which are painful?

Perverting a Survey of AI Theories

January 11, 2019

In this post, I shamelessly cannibalize, invert, and repurpose a great diagram of contemporary AI theory categories from here.

[Diagram: Multisense Realism inversion of the AI theory categories]

Joscha Bach: We need to understand the nature of AI to understand who we are – Part 2

December 17, 2018

This is the second part of my comments on Nikola Danaylov’s interview of Joscha Bach: https://www.singularityweblog.com/joscha-bach/

My commentary on the first hour is here. Please watch or listen to the podcast as there is a lot that is omitted and paraphrased in this post. It’s a very fast paced, high-density conversation, and I would recommend listening to the interview in chunks and following along here for my comments if you’re interested.


1:00:00 – 1:10:00

JB – Conscious attention, in a sense, is the ability to make indexed memories that I can later recall. I also store the expected result and the triggering condition. When do I expect the result to be visible? Later I have feedback about whether the decision was good or not. I compare the result I expected with the result that I got, and I can undo the decision that I made back then. I can change the model or reinforce it. I think that this is the primary mode of learning that we use, beyond just associative learning.

JB – 1:01:00 Consciousness means that you will remember what you had attended to. You have this protocol of ‘attention’. The memory of the binding state itself, the memory of being in that binding state where you have this observation that combines as many perceptual features as possible into a single function. The memory of that is phenomenal experience. The act of recalling this from the protocol is Access Consciousness. You need to train the attentional system so it knows where you store your backend cognitive architecture. This is recursive access to the attentional protocol, you remember when you make the recall. You don’t do this all the time, only when you want to train this. This is reflexive consciousness. It’s the memory of the access.

CW – By that definition, I would ask if consciousness couldn’t exist just as well without any phenomenal qualities at all. It is easy to justify consciousness as a function after the fact, but I think that this seduces us into thinking that something impossible can become possible just because it could provide some functionality. To say that phenomenal experience is a memory of a function that combines perceptual features is to presume that there would be some way for a computer program to access its RAM as perceptual features rather than as the (invisible, unperceived) states of the RAM hardware itself.
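To make that objection concrete, the functional loop Bach describes can be sketched in a few lines (a caricature of my own, with invented names, not Bach’s actual architecture): attention stores an indexed memory with an expected result and a trigger, and later feedback compares expectation against outcome.

```python
# Hypothetical sketch of the described "attentional protocol" loop.
protocol = []  # indexed memories: decision, expectation, trigger, outcome

def attend(decision, expected, trigger):
    """Store a decision with its expected result; return its index."""
    protocol.append({"decision": decision, "expected": expected,
                     "trigger": trigger, "outcome": None})
    return len(protocol) - 1

def feedback(index, outcome):
    """Recall the memory, record the outcome, reinforce or revise the model."""
    memory = protocol[index]
    memory["outcome"] = outcome
    return "reinforce" if outcome == memory["expected"] else "revise"

i = attend("take umbrella", expected="stay dry", trigger="leaving house")
print(feedback(i, "stay dry"))  # reinforce: expectation matched the result
```

The point of spelling it out is that the loop runs to completion with nothing anywhere that answers to “phenomenal experience”; the stored entries are states, not feelings.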

JB – Then there is another thing, the self. The self is a model of what it would be like to be a person. The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them. They cannot even approximate arbitrary function, except by evolution, which takes a very long time. What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear. A simulation basically is isomorphic in the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating. We are characters in that movie. The movie is a complete simulation, a VR that is running in the neocortex.

You and me are characters in this VR. In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe. We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification. That doesn’t seem to be a useful condition. The brain is normally set up so that the self thinks that it’s real, and gets access to the language center, and we can talk to each other, and here we are. The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious. Some people think that a simulation cannot be conscious, only a physical system can, but they’ve got it completely backwards. A physical system cannot be conscious, only a simulation can be conscious. Consciousness is a simulated property of a simulated self.

CW – To say “The self is a model of what it would be like to be a person” seems to be circular reasoning. The self is already what it is like to be a person. If it were a model, then it would be a model of what it’s like to be a computer program with recursively binding (binding) states. Then the question becomes, why would such a model have any “what it’s like to be” properties at all? Until we can explain exactly how and why a phenomenal property is an improvement over the absence of a phenomenal property for a machine, there’s a big problem with assuming the role of consciousness or self as ‘model’ for unconscious mechanisms and conditions. Biological machines don’t need to model, they just need to behave in the ways that tend toward survival and reproduction.

(JB) “The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them”.

CW – I agree with this, to the extent that I agree that if there were any such thing as *purely* physical structures, they would not feel anything, and they would just be tangible geometric objects in public space. Rather than physical activity somehow leading to emergent non-physical ‘feelings’, it makes more sense to me that physics is made of “feelings” which are so distant and different from our own that they are rendered as tangible geometric objects. It could be that physical structures appear in these limited modes of touch perception rather than in their own native spectrum of experience because they are much slower/faster and older than our own.

To say that neurons or brains feel would be, in my view, a category error since feeling is not something that a shape can logically do, just by Occam’s Razor, and if we are being literal, neurons and brains are nothing but three-dimensional shapes. The only powers that a shape could logically have are geometric powers. We know from analyzing our dreams that a feeling can be symbolized as a seemingly solid object or a place, but a purely geometric cell or organ would have no way to access symbols unless consciousness and symbols are assumed in the first place.

If a brain has the power to symbolize things, then we shouldn’t call it physical. The brain does a lot of physical things but if we can’t look into the tissue of the brain and see some physical site of translation from organic chemistry into something else, then we should not assume that such a transduction is physical. The same goes for computation. If we don’t find a logical function that changes algorithms into phenomenal presentations then we should not assume that such a transduction is computational.

(JB) “What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear.”

CW – Here also the reasoning seems circular. Useful to know what? “What it is like” doesn’t have to mean anything to a machine or program. To me this is like saying that a self-driving car would find it useful to create a dashboard and pretend that it is driven by a person using that dashboard rather than being driven directly by the algorithms that would be used to produce the dashboard.

(JB) “A simulation basically is isomorphic in the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating.”

CW – I have thought of it that way, but now I think that it makes more sense if we see both the brain and the person as parts of a movie that is branching off from a larger movie. I propose that timescale differentiation is the primary mechanism of this branching, although timescale differentiation is only one sort of perceptual lensing that allows experiences to include and exclude each other.

I think that we might be experiential fragments of an eternal experience, and a brain is a kind of icon that represents part of the story of that fragmentation. The brain is a process made of other processes, which are all experiences that have been perceptually lensed by the senses of touch and sight to appear as tangible and visible shapes.

The brain has no mechanical reason to make movies, it just has to control the behavior of a body in such a way that repeats behaviors which have happened to coincide with bodies surviving and reproducing. I can think of some good reasons why a universe which is an eternal experience would want to dream up bodies and brains, but once I plug up all of the philosophical leaks of circular reasoning and begging the question, I can think of no plausible reason why an unconscious body or brain would or could dream.

All of the reasons that I have ever heard arise as post hoc justifications that betray an unscientific bias toward mechanism. In a way, the idea of mechanism as omnipotent is even more bizarre than the idea of an omnipotent deity, since the whole point of a mechanistic view of nature is to replace undefined omnipotence with robustly defined, rationally explained parts and powers. If we are just going to say that emergent phenomenal magic happens once the number of shapes or data relations is so large that we don’t want to deny any power to it, we are really just reinventing religious faith in an inverted form. It is to say that sufficiently complex computations transcend computation for reasons that transcend computation.

(JB) “The movie is a complete simulation, a VR that is running in the neocortex.”

CW – We have the experience of playing computer games using a video screen, so we conflate a computer program with a video screen’s ability to render visible shapes. In fact, it is our perceptual relationship with the video screen that is doing the most critical part of the simulating. The computer by itself, without any device that can produce visible color and contrast, would not fool anyone. There’s no parsimonious or plausible way to justify giving the physical states of a computing machine aesthetic qualities unless we are expecting aesthetic qualities from the start. In that case, there is no honest way to call them mere computers.

(JB) “In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe.”

CW – Computer programs don’t need desires or rewards though. Programs are simply executed by physical force. Algorithms don’t need to serve a purpose, nor do they need to be enticed to serve a purpose. There’s no plausible, parsimonious reason for the brain to write its predictive algorithms or meta-algorithms as anything like a ‘feeling’ or sensation. All that is needed for a brain is to store some algorithmically compressed copy of its own brain state history. It wouldn’t need to “feel” or feel “what it’s like”, or feel what it’s like to “be in a universe”. These are all concepts that we’re smuggling in, post hoc, from our personal experience of feeling what it’s like to be in a universe.

(JB) “We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification.”

CW – It’s easy to say that it’s not very useful knowledge if it doesn’t fit our theory, but we need to test for that bias scientifically. It might just be that people depersonalize or have negative results to the idea that they don’t really exist because it is false, and false in a way that is profoundly important. We may be as real as anything ever could be, and there may be no ‘simulation’ except via the power of imagination to make believe.

(JB) “The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious.”

CW – I don’t see a logical need for that. Attention need not logically facilitate any phenomenal properties. Attention can just as easily be purely behavioral, as can ‘memory’, or ‘models’. A mechanism can be triggered by groups of mechanisms acting simultaneously without any kind of semantic link defining one mechanism as a model for something else. Think of it this way: What if we wanted to build an AI without ANY phenomenal experience? We could build a social chameleon machine, a sociopath with no model of self at all, but instead a set of reflex behaviors that mimic those of others which are deemed to be useful for a given social transaction.

(JB) “A physical system cannot be conscious, only a simulation can be conscious.”

CW – I agree this is an improvement over the idea that physical systems are conscious. What would it mean for a ‘simulation’ to exist in the absence of consciousness though? A simulation implies some conscious audience which participates in believing or suspending disbelief in the reality of what is being presented. How would it be possible for a program to simulate part of itself as something other than another (invisible, unconscious) program?

(JB) “Consciousness is a simulated property of a simulated self.”

I turn that around 180 degrees. Consciousness is the sole absolutely authentic property. It is the base level sanity and sense that is required for all sense-making to function on top of. The self is the ‘skin in the game’ – the amplification of consciousness via the almost-absolutely realistic presentation of mortality.

KD – So in a way, Daniel Dennett is correct?

JB – Yes, […] but the problem is that the things that he says are not wrong, but they are also not non-obvious. It’s valuable because there are no good or bad ideas. It’s a good idea if you comprehend it and it elevates your current understanding. In a way, ideas come in tiers. The value of an idea for the audience is if it’s a half tier above the audience. You and I have the illusion that we find objectively good ideas, because we work at the edge of our own understanding, but we cannot really appreciate ideas that are a couple of tiers above our own ideas. One tier above finds a new audience; two tiers above means that we don’t understand the relevance of these ideas, because we don’t have the ideas that we need to appreciate the new ideas. An idea appears to be great to us when we can stand right in its foothills and look at it. It doesn’t look great anymore when we stand on the peak of another idea, look down, and realize the previous idea was just the foothills to that idea.

KD – Discusses the problems with the commercialization of academia and the negative effects it has on philosophy.

JB – Most of us never learn what it really means to understand, largely because our teachers don’t. There are two types of learning. One is you generalize over past examples, and we call that stereotyping if we’re in a bad mood. The other tells us how to generalize, and this is indoctrination. The problem with indoctrination is that it might break the chain of trust. If someone doesn’t check the epistemology of the people that came before them, and take their word as authority, that’s a big difficulty.

CW – I like the ideas of tiers because it confirms my suspicion that my ideas are two or three tiers above everyone else’s. That’s why y’all don’t get my stuff…I’m too far ahead of where you’re coming from. 🙂

1:07:00 Discussion about Ray Kurzweil, the difficulty in predicting timeline for AI, confidence, evidence, outdated claims and beliefs etc.

1:19 JB – The first stage of AI: finding things that require intelligence to do, like playing chess, and then implementing them as algorithms – manually engineering strategies for being intelligent in different domains. This didn’t scale up to General Intelligence.

We’re now in the second phase of AI, building algorithms to discover algorithms. We build learning systems that approximate functions. He thinks deep learning should be called compositional function approximation. Using networks of many functions instead of tuning single regressions.
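CW – As an aside, here is a minimal sketch of what “compositional function approximation” can mean, using a toy example of my own construction (the sine target, the random tanh features, and all the numbers are illustrative assumptions, not anything JB specified):

```python
import numpy as np

# Target: a function we want to approximate on [0, 1].
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2.0 * np.pi * x)

# "Single regression": one linear fit, y ≈ w*x + b.
X1 = np.hstack([x, np.ones_like(x)])
w_lin, *_ = np.linalg.lstsq(X1, y, rcond=None)
err_linear = np.mean((X1 @ w_lin - y) ** 2)

# "Network of many functions": 50 random nonlinear features
# composed with a linear readout (a one-hidden-layer model,
# fit in closed form for simplicity rather than by backprop).
W = rng.normal(size=(1, 50)) * 5.0   # random feature weights
b = rng.normal(size=(1, 50))         # random feature offsets
H = np.tanh(x @ W + b)               # nonlinear layer
H1 = np.hstack([H, np.ones((len(x), 1))])
w_out, *_ = np.linalg.lstsq(H1, y, rcond=None)
err_composed = np.mean((H1 @ w_out - y) ** 2)

print(err_linear, err_composed)
assert err_composed < err_linear     # composition fits far better
```

A single linear regression cannot bend around the sine at all, while a composition of many simple nonlinear functions with a linear readout gets much closer – which, as I understand it, is the gist of the “networks of many functions instead of tuning single regressions” framing.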

There could be a third phase of AI where we build meta-learning algorithms. Maybe our brains are meta-learning machines, not just learning stuff but learning ways of discovering how to learn stuff (for a new domain). At some point there will be no more phases and science will effectively end because there will be a general theory for global optimization with finite resources and all science will use that algorithm.
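CW – A toy illustration of that “learning to discover how to learn” idea (again my own construction, not JB’s): the inner learner runs plain gradient descent, while an outer loop learns the parameter of the learning rule itself.

```python
import numpy as np

# Inner learner: gradient descent on f(w) = (w - 3)^2,
# run for a fixed budget of steps with a given learning rate.
def learn(lr, steps=20, w0=0.0):
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # derivative of (w - 3)^2
        w -= lr * grad
    return (w - 3.0) ** 2        # final loss after the budget

# Meta-learner: searches over the learning rule's own parameter,
# i.e. it learns how the inner learner should learn.
candidate_lrs = np.linspace(0.01, 0.9, 30)
losses = [learn(lr) for lr in candidate_lrs]
best_lr = candidate_lrs[int(np.argmin(losses))]

print(best_lr, min(losses))
```

The outer loop here is only a brute-force search, but the division of labor is the point: one level learns the task, another level learns the way of learning.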

CW – I think that the more experience we gain with AI, the more we will see that it is limited in ways that we have not anticipated, and also that it is powerful in ways that we have not anticipated. I think that we will learn that intelligence as we know it cannot be simulated, however, in trying to simulate it, we will have developed something powerful, new, and interesting in its impersonal orthogonality to personal consciousness. The revolution may not be about the rise of computers becoming like people but of a rise in appreciation for the quality and richness of personal conscious experience in contrast to the impersonal services and simulations that AI delivers.

1:23 KD – Where does ethics fit, or does it?

JB – Ethics is often misunderstood. It’s not about being good or emulating a good person. Ethics emerges when you conceptualize the world as different agents, and yourself as one of them, and you share purposes with the other agents but you have conflicts of interest. If you think that you don’t share purposes with the other agents, if you’re just a lone wolf, and the others are your prey, there’s no reason for ethics – you only look for the consequences of your actions for yourself with respect for your own reward functions. It’s not ethics though – not a shared system of negotiation because only you matter, because you don’t share a purpose with the others.

KD – It’s not shared but it’s your personal ethical framework, isn’t it?

JB – It has to be personal. I decided not to eat meat because I felt that I shared a purpose with animals: the avoidance of suffering. I also realized that it is not mutual. Cows don’t care about my suffering. They don’t think about it a lot. I had to think about the suffering of cows, so I decided to stop eating meat. That was an ethical decision. It’s a decision about how to resolve conflicts of interest under conditions of shared purpose. I think this is what ethics is about. It’s a rational process in which you negotiate, with yourself and with others, the resolution of conflicts of interest under conditions of shared purpose. I can make decisions about what purposes we share. Some of them are sustainable and others are not – they lead to different outcomes. In a sense, ethics requires that you conceptualize yourself as something above the organism; that you identify with the systems of meanings above yourself so that you can share a purpose. Love is the discovery of shared purpose. There needs to be somebody you can love that you can be ethical with. At some level you need to love them. You need to share a purpose with them. Then you negotiate; you don’t want them, or yourself, to fail in all regards. This is what ethics is about. It’s computational too. Machines can be ethical if they share a purpose with us.

KD – Other considerations: Perhaps ethics can be a framework within which two entities that do not share interests can negotiate in and peacefully coexist, while still not sharing interests.

JB – Not interests but purposes. If you don’t share purposes then you are defecting against your own interests when you don’t act on your own interest. It doesn’t have integrity. You don’t share a purpose with your food, other than that you want it to be nice and edible. You don’t fall in love with your food, it doesn’t end well.

CW – I see this as a kind of game-theoretic view of ethics…which I think is itself (unintentionally) unethical. I think it is true as far as it goes, but it makes assumptions about reality that are ultimately inaccurate, as they begin by defining reality in the terms of a game. I think this automatically elevates the intellectual function and its objectivizing/controlling agendas at the expense of the aesthetic/empathetic priorities. What if reality is not a game? What if the goal is not to win by being a winner, but to improve the quality of experience for everyone and to discover and create new ways of doing that?

Going back to JB’s initial comment that ethics are not about being good or emulating a good person, I’m not sure about that. I suspect that many people, especially children will be ethically shaped by encounters with someone, perhaps in the family or a character in a movie who appeals to them and who inspires imitation. Whether their appeal is as a saint or a sinner, something about their style, the way they communicate or demonstrate courage may align the personal consciousness with transpersonal ‘systems of meanings above’ themselves. It could be a negative example which someone encounters also. Someone that you hate who inspires you to embody the diametrically opposite aesthetics and ideals.

I don’t think that machines can be ethical or unethical, not because I think humans are special or better than machines, but out of simple parsimony. Machines don’t need ethics. They perform tasks, not for their own purposes, or for any purpose, but because we have used natural forces and properties to perform actions that satisfy our purposes. Try as we might (and I’m not even sure why we would want to try), I do not think that we will succeed in changing matter or computation into something which both can be controlled by us and which can generate its own purposes. I could be wrong, but I think this is a better reason to be skeptical of AI than any reason that computation gives us to be skeptical of consciousness. It also seems to me that the aesthetic power of a special person who exemplifies a particular set of ethics can be taken to be a symptom of a larger, absolute aesthetic power in divinity or in something like absolute truth. This doesn’t seem to fit the model of ethics as a game-theoretic strategy.

JB – Discussion about eating meat, offers example pro-argument that it could be said that a pasture raised cow could have a net positive life experience since they would not exist but for being raised as food. Their lives are good for them except for the last day, which is horrible, but usually horrible for everyone. Should we change ourselves or change cattle to make the situation more bearable? We don’t want to look at it because it is un-aesthetic. Ethics in a way is difficult.

KD – That’s the key point of ethics. It requires sometimes we make choices that are not in our own best interests perhaps.

JB – It depends on how we define ourselves. We could say that the self is identical to the well-being of the organism, but this is a very short-sighted perspective. I don’t actually identify all the way with my organism. There are other things – I identify with society, my kids, my relationships, my friends, their well-being. I am all the things that I identify with and want to regulate in a particular way. My children are objectively more important than me. If I have to make a choice whether my kids survive or I do, my kids should survive. This is as it should be if nature has wired me up correctly. You can change the wiring, but this is also the weird thing about ethics. Ethics becomes very tricky to discuss once the reward function becomes mutable. When you are able to change what is important to you, what you care about, how do you define ethics?

CW – And yet, the reward function is mutable in many ways. Our experience in growing up seems to be marked by a changing appreciation for different kinds of things, even in deriving reward from controlling one’s own appetite for reward. The only constant that I see is in phenomenal experience itself. No matter how hedonistic or ascetic, how eternalist or existential, reward is defined by an expectation for a desired experience. If there is no experience that is promised, then there is no function for the concept of reward. Even in acts of self-sacrifice, we imagine that our action is justified by some improved experience for those who will survive after us.

KD – I think you can call it a code of conduct or a set of principles and rules that guide my behavior to accomplish certain kinds of outcomes.

JB – There are no beliefs without priors. What are the priors that you base your code of conduct on?

KD – The priors or axioms are things like diminishing suffering or taking an outside/universal view. When it comes to me not eating meat, I take a view that is hopefully outside of both me and the cows. I’m able to look at the suffering of eating a cow and their suffering of being eaten. If my prior is ‘minimize suffering’, because my test criterion for a sentient being is ‘can it suffer?’, then minimizing suffering must be my guiding principle in how I relate to another entity. Basically, everything builds up from there.

JB – The most important part of becoming an adult is taking charge of your own emotions – realizing that your emotions are generated by your own brain/organism, and that they are here to serve you. You’re not here to serve your emotions. They are here to help you do the things that you consider to be the right things. That means that you need to be able to control them, to have integrity. If you are just a victim of your emotions and do not do the things that you know are the right things, you don’t have integrity. What is suffering? Pain is the result of some part of your brain sending a teaching signal to another part of your brain to improve its performance. If the regulation is not correct, because you cannot actually regulate that particular thing, the pain signal will usually endure and increase until your brain figures it out and turns off the signaling center, because it’s not helping. In a sense, suffering is a lack of integrity. The difficulty is only that many beings cannot get to the degree of integrity where they can control the application of learning signals in their brain…control the way that their reward function is computed and distributed.

CW – My criticism is the same as in the other examples. There’s no logical need for a program or machine to invent ‘pain’ or any other signal to train or teach. If there is a program to run an animal’s body, the program need only execute those functions which meet the criteria of the program. There’s no way for a machine to be punished or rewarded because there’s no reason for it to care about what it is doing. If anything, caring would impede optimal function. If a brain doesn’t need to feel to learn, then why would a brain’s simulation need to feel to learn?

KD – According to your view, suffering is a simulation or part of a simulation.

JB – Everything that we experience is a simulation. We are a simulation. To us it feels real. There is no getting around this. I have learned in my life that all of my suffering is a result of not being awake. Once I wake up, I realize what’s going on. I realize that I am a mind. The relevance of the signals that I perceive is completely up to the mind. The universe does not give me objectively good or bad things. The universe gives me a bunch of electrical impulses that manifest in my thalamus, and my brain makes sense of them by creating a simulated world. The valence in that simulated world is completely internal – it’s completely part of that world, it’s not objective…and I can control this.

KD – So you are saying suffering is subjective?

JB – Suffering is real to the self with respect to ethics, but it is not immutable. You can change the definition of your self, the things that you identify with. We don’t have to suffer about things, political situations for example, if we recognize them to be mechanical processes that happen regardless of how we feel about them.

CW – The problem with the idea of simulation is that we are picking and choosing which features of our experience are more isomorphic to what we assume is an unsimulated reality. Such an assumption is invariably a product of our biases. If we say that the world we experience is a simulation running on a brain, why not also say that the brain is also a simulation running on something else? Why not say that our experiences of success with manipulating our own experience of suffering is as much of a simulation as the original suffering was? At some point, something has to genuinely sense something. We should not assume that just because our perception can be manipulated we have used manipulation to escape from perception. We may perceive that we have escaped one level of perception, or objectified it, but this too must be presumed to be part of the simulation as well. Perception can only seem to have been escaped in another perception. The primacy of experience is always conserved.

I think that it is the intellect that is over-valuing the significance of ‘real’ because of its role in protecting the ego and the physical body from harm, but outside of this evolutionary warping, there is no reason to suspect that the universe distinguishes in an absolute sense between ‘real’ and ‘unreal’. There are presentations – sights, sounds, thoughts, feelings, objects, concepts, etc., but the realism of those presentations can only be made of the same types of perceptions. We see this in dreams, with false awakenings, etc. Our dream has no problem with spontaneously confabulating experiences of waking up into ‘reality’. This is not to discount the authenticity of waking up in ‘actual reality’, only to say that if we can tell that it is authentic, then it necessarily means that our experience is not detached from reality completely and is not meaningfully described as a simulation. There are some recent studies that suggest that our perception may be much closer to ‘reality’ than we thought, i.e. that we can train ourselves to perceive quantum level changes.

If that holds up, we need to re-think the idea that it would make sense for a bio-computer to model or simulate a phenomenal reality that is so isomorphic and redundant to the unperceived reality. There’s not much point in a 1 to 1 scale model. Why not just put the visible photons inside the visual cortex in exactly the field that we see? I think that something else is going on. There may not be a simulation, only a perceptual lensing between many different concurrent layers of experience – not a dualism or dual-aspect monism, but a variable aspect monism. We happen to be a very, very complex experience which includes the capacity to perceive aspects of its own perception in an indirect or involuted rendering.

KD – Stoic philosophy says that we suffer not from events or things that happen in our lives, but from the stories that we attach to them. If you change the story, you can change the way you feel about them and reduce suffering. Let go of things we can’t really control, body, health, etc. The only thing you can completely control is your thoughts. That’s where your freedom and power come to be. In that mind, in that simulation, you’re the God.

JB – This ability to make your thoughts more truthful is Western enlightenment – Aufklärung in German. There is also this other sense of enlightenment, Erleuchtung, that you have in a spiritual context. So Aufklärung fixes your rationality and Erleuchtung fixes your motivation. It fixes what’s relevant to you and your relationship between self and the universe. Often they are seen as mutually exclusive, in the sense that Aufklärung leads to nihilism, because you don’t give up your need for meaning, you just prove that it cannot be satisfied. God does not exist in any way that can set you free. In this other sense, you give up your understanding of how the world actually works so that you can be happy. You go down to a state where all people share the same cosmic consciousness, which is complete bullshit, right? But it’s something that removes the illusion of separation and the suffering that comes with the separation. It’s unsustainable.

CW – This duality of Aufklärung and Erleuchtung I see as another expression of the polarity of the universal continuum of consciousness. Consciousness vs machine, East vs West, Wisdom vs Intelligence. I see both extremes as having pathological tendencies. The Western extreme is cynical, nihilistic, and rigid. The Eastern extreme is naïve, impractical, and delusional. Cosmic consciousness or God does not have to be complete bullshit, but can be a hint of ways to align ourselves and bring about more positive future experiences, both personally and transpersonally.

Basically, I think that both the brain and the dreamer of the brain are themselves part of a larger dream that may or may not be like a dreamer. It may be that these possibilities are in participatory superposition, like an ambiguous image, so that what we choose to invest our attention in can actually bias experienced outcomes toward a teleological or non-teleological absolute. Maybe our efforts could result in the opposite effect also, or some combination of the two. If the universe consists of dreams and dreamed dreamers, then it is possible for our personal experience to include a destiny where we believe one thing about the final dream and find out we were wrong, or right, or wrong then right then wrong again, etc., forever.

KD – Where does that leave us with respect to ethics though? Did you dismantle my ethics, the suffering test?

JB – Yeah, it’s not good. The ethic of eliminating suffering leads us to eliminating all life eventually. Anti-natalism – stop bringing organisms into the world, and end the lives of the organisms that are already here as painlessly as possible, all in order to eliminate suffering. Is this what you want?

KD – (No) So what’s your ethics?

JB – Existence is basically neutral. Why are there so few stoics around? It seems so obvious – only worry about things to the extent that worrying helps you change them…so why is almost nobody a Stoic?

KD – There are some Stoics and they are very inspirational.

JB – I suspect that Stoicism is maladaptive. Most cats I have known are Stoics. If you leave them alone, they’re fine. Their baseline state is ok, they are ok with themselves and their place in the universe, and they just stay in that place. If they are hungry or want to play, they will do the minimum that they have to do to get back into their equilibrium. Human beings are different. When they get up in the morning they’re not completely fine. They need to be busy during the day, but in the evening they feel fine. In the evening they have done enough to make peace with their existence again. They can have a beer and be with their friends and everything is good. Then there are some individuals who have so much discontent within themselves that they can’t take care of it in a single day. From an evolutionary perspective, you can see how this would be adaptive for a group-oriented species. Cats are not group-oriented. For them, it’s rational to be a Stoic. If you are a group animal, it makes sense for individuals to overextend themselves for the good of the group – to generate a surplus of resources for the group.

CW – I don’t know if we can generalize about humans that way. Some people are more like cats. I will say that I think it is possible to become attached to non-attachment. The stoic may learn to disassociate from the suffering of life, but this too can become a crutch or ‘spiritual bypass’.

KD – But evolution also diversifies things. Evolution hedges its bets by creating diversity, so some individuals will be more adaptive to some situations than others.

JB – That may not be true. We don’t find more species in larger habitats. Competition is more fierce. We reduce the number of species dramatically. We are probably eventually going to look like a meteor as far as obliterating species on this planet.

KD – So what does that mean for ethics in technology? What’s the solution? Is there room for ethics in technology?

JB – Of course. It’s about discovering the long game. You have to look at the long term influences and you also have to question why you think it’s the right thing to do, what the results of that are, which gets tricky.

CW – I think that all that we can do is to experiment and be open to the possibilities that our experiments themselves may be right or wrong. There may be no way of letting ourselves off the hook here. We have to play the game as players with skin in the game, not as safe observers studying only those rules that we have invested in already.

KD – We can agree on that, but how do you define ethics yourself?

JB – There are some people in AI who think that ethics are a way for politically savvy people to get power over STEM people…and with considerable success. It’s largely a protection racket. Ethical studies are relatable and so make a big splash, but it would rarely happen that a self-driving car would have to make those decisions. My best answer of how I define ethics myself is that it is the principled negotiation of conflicts of interest under conditions of shared purpose. When I look at other people, I mostly imagine myself as being them in a different timeline. Everyone is in a way me on a different timeline, but in order to understand them I need to flip a number of bits. These bits are the conditions of negotiation that I have with you.

KD – Where do cows fit in? We don’t have a shared purpose with them. Can you have a shared purpose with respect to the cows, then?

JB – The shared purpose doesn’t objectively exist. You basically project a shared meaning above the level of the ego. The ego is the function that integrates expected rewards over the next fifty years.

KD – That’s what Peter Singer calls the Universe point of view, perhaps.

JB – If you can go to this Eternalist perspective where you integrate expected reward from here to infinity, most of that being outside of the universe, this leads to very weird things. Most of my friends are Eternalists. All these Romantic Russian Jews, they are like that, in a way. This Eastern European shape of the soul. It creates something like a conspiracy, it creates a tribe, and it’s very useful for corporations. Shared meaning is a very important thing for a corporation that is not transactional. But there is a certain kind of illusion in it. To me, meaning is like the Ring of Mordor. If you drop the ring, you will lose the brotherhood of the ring and you will lose your mission. You have to carry it, but very lightly. If you put it on, you will get super powers, but you get corrupted because there is no meaning. You get drawn into a cult that you create…and I don’t want to do that…because it’s going to shackle my mind in ways that I don’t want it to be bound.
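CW – The contrast between the ego that integrates expected rewards over the next fifty years and the Eternalist who integrates “from here to infinity” can be made concrete with a standard discounted-reward sum. The discount factor and reward values below are my own illustrative assumptions, not anything JB specified:

```python
# Discounted sum of a constant expected reward r per year,
# with discount factor gamma per year (illustrative values).
def discounted_sum(r, gamma, horizon=None):
    if horizon is None:                  # Eternalist: integrate to infinity
        return r / (1.0 - gamma)         # closed form of the geometric series
    return sum(r * gamma**t for t in range(horizon))

r, gamma = 1.0, 0.97
ego = discounted_sum(r, gamma, horizon=50)   # ~fifty-year ego
eternalist = discounted_sum(r, gamma)        # infinite-horizon view

print(ego, eternalist)
```

Even with discounting, the two integrals differ, and the gap is what the infinite-horizon perspective adds – which may be why JB says that extending the sum past the organism’s lifetime “leads to very weird things.”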

CW – I agree it is important not to get drawn into a cult that we create, however, what I have found is that the drive to negate superstition tends toward its own cult of ‘substitution’. Rather than the universe being a divine conspiracy, the physical universe is completely innocent of any deception, except somehow for our conscious experience, which is completely deceptive, even to the point of pretending to exist. How can there be a thing which is so unreal that it is not even a thing, and yet come from a universe that is completely real and only does real things?

KD – I really like that way of seeing but I’m trying to extrapolate from your definition of ethics a guide of how we can treat the cows and hopefully how the AIs can treat us.

JB – I think that some people have this idea, similar to Asimov’s, that at some point the Roombas will become larger and more powerful so that we can make them washing machines, or let them do our shopping, or nursing…that we will still enslave them but negotiate conditions of co-existence. I think that what is going to happen instead is that corporations, which are already intelligent agents that just happen to borrow human intelligence, automate their decision making. At the moment, a human being can often outsmart a corporation, because the corporation has so much time in between updating its Excel spreadsheets and the next weekly meetings. Imagine it automates and weekly meetings take place every millisecond, and the thing becomes sentient and understands its role in the world, and the nature of physics and everything else. We will not be able to outsmart that anymore, and we will not live next to it, we will live inside of it. AI will come from top down on us. We will be its gut flora. The question is how we can negotiate that it doesn’t get the idea to use antibiotics, because we’re actually not good for anything.

KD – Exactly. And why wouldn’t they do that?

JB – I don’t see why.

CW – The other possibility is that AI will not develop its own agendas or true intelligence. That doesn’t mean our AI won’t be dangerous, I just suspect that the danger will come from our misinterpreting the authority of a simulated intelligence rather than from a genuine mechanical sentience.

KD – Is there an ethics that could guide them to treat us just like you decided to treat the cows when you decided not to eat meat?

JB – Probably no way to guarantee all AIs would treat us kindly. If we used the axiom of reducing suffering to build an AI that will be around for 10,000 years and keep us around too, it will probably kill 90% of the people painlessly and breed the rest into some kind of harmless yeast. This is not what you want, even though it would be consistent with your stated axioms. It would also open a Pandora’s Box to wake up as many people as possible so that they will be able to learn how to stop their suffering.

KD – Wrapping up

JB – Discusses the book he’s writing about how AI has discovered ways of understanding the self and consciousness which we did not have 100 years ago. The nature of meaning, how we actually work, etc. The field of AI is largely misunderstood. It is different from the hype; it is largely, in a way, statistics on steroids. It’s identifying new functions to model reality. It’s largely experimental and has not gotten to the state where it can offer proofs of optimality. It can do things in ways that are much better than the established rules of statisticians. There is also going to be a convergence between econometrics, causal dependency analysis, AI, and statistics. It’s all going to be the same in a particular way, because there are only so many ways that you can make mathematics about reality. We confuse this with the idea of what a mind is. They’re closely related. I think that our brain contains an AI that is making a model of reality and a model of a person in reality, and this particular solution of what a particular AI can do in the modeling space is what we are. So in a way we need to understand the nature of AI, which I think is the nature of sufficiently general function approximation – maybe all the truth that can be found by an embedded observer, in particular kinds of universes that have the power to create it. This could be the question of what AI is about: how modeling works in general. For us the relevance of AI is how it explains who we are. I don’t think there is anything else that can.

CW – I agree that AI development is the next necessary step to understanding ourselves, but I think that we will be surprised to find that General Intelligence cannot be simulated and that this will lead us to ask the deeper questions about authenticity and irreducibly aesthetic properties.

KD – So by creating AI, we can perhaps understand the AI that is already in our brain.

JB – We already do. The ideas of Minsky and many others who have contributed to this field are already better than anything that we had 200 years ago. We could only develop many of these ideas because we began to understand the nature of modeling – the status of reality.

The nature of our relationship to the outside world: we started out with a dualistic intuition in our culture, that there is a thinking substance (Res Cogitans) and an extended substance (Res Extensa) – a universe of stuff in space and a universe of ideas. We now realize that they both exist, but they both exist within the mind. We understand that everything perceptual gets mapped to a region in three-space, but we also understand that physics is not a three-space; it’s something else entirely. The three-space exists only as a potential of electromagnetic interactions at a certain order of magnitude above the Planck length, where we are entangled with the universe. This is what we model, and this looks three-dimensional to us.

CW – I am sympathetic to this view, however, I suggest an entirely different possibility. Rather than invoking a dualism of existing in the universe and existing ‘in the mind’, I see that existence itself is an irreducibly perceptual-participatory phenomenon. Our sense of dualism may actually reveal more insights into our deeper reality than those insights which assume that tangible objects and information exist beyond all perception. The more we understand about things like quantum contextuality and relativity, I think the more we have to let go of the compulsion to label things that are inconvenient to explain as illusions. I see Res Cogitans and Res Extensa as opposite poles of a Res Aesthetica continuum which is absolute and eternal. It is through the modulation of aesthetic lensing that the continuum is diffracted into various modalities of sense experience. The cogitans of software and the extensa of hardware can never meet except through the mid-range spectrum of perception. It is from that fertile center, I suspect, that most of the novelty and richness of the universe is generated, not from sterile algorithms or game-theoretic statistics on the continuum’s lensed peripheries.

Everything else we come up with that cannot be mapped to three-space is Res Cogitans. If we transfer this dualism into a single mind, then we have the idealist monism found in various spiritual teachings – the idea that there is no physical reality, that we live in a dream, that we are characters dreamed by a mind on a higher plane of existence, and that this is why miracles are possible. Then there is the Western perspective of a mechanical universe: it’s entirely mechanical, there’s no conspiracy going on. Now we understand that these views are not in opposition; they’re complements. We actually do live in a dream, but the dream is generated by our neocortex. Our brain is not a machine that can give us access to reality as it is, because that’s not possible for a system that is only measuring a few bits at a systemic interface. There are no colors and sounds on Earth. We already know that.

CW – Why stop at colors and sounds, though? How can we arbitrarily say that there is an Earth or a brain when we know that it is only a world simulated by some kind of code? If we unravel ourselves into evolution, why not keep going and unravel evolution as well? Maybe colors and sounds are a more insightful and true reflection of what nature is made of than the blind measurements that we take secondhand through physical instruments. It seems clear to me that this is a bias which has not yet properly appreciated the hints of relativity and quantum contextuality. If we say that physics has no frame of reference, then we have to consider that we may be making up an artificial frame of reference that merely seems to us like no frame of reference. If we live in a dream, then so does the neocortex. Maybe they are different dreams, but there is no sound scientific reason to privilege every dream in the universe as real except our own.

The sounds and colors are generated as a dream inside your brain. The same circuits that make dreams during the night make dreams during the day. This is, in a way, our inner reality being created by a brain. The mind on a higher plane of existence does exist: it’s the brain of a primate, made of cells, living in a mechanical physical universe. Magic is possible because you can edit your memories; you can make that simulation anything you want it to be. Many of these changes are not sustainable, which is why the sages warn against using magic(k): if, down the line, you change your reward function, bad things may happen. You cannot break the bank.

KD – To simplify all of this, we need to understand the nature of AI to understand ourselves.

JB – Yeah, well, I would say that AI is the field that took up the slack after psychology failed as a science. Psychology got terrified of overfitting, so it stopped making theories of the mind as a whole and restricted itself to theories with very few free parameters so it could test them. Even those didn’t replicate, as we now know. After Piaget, psychology largely didn’t go anywhere, in my perspective. That might be too harsh, because I see it from the outside, and outsiders of AI might argue that AI didn’t go very far either; as an insider I’m more partial here.

CW – It seems to me that psychology ran up against a barrier analogous to Gödel’s incompleteness. To go on trying to objectify subjectivity necessarily brings into question the tools of formalism themselves. It may be that transpersonal psychology came too far too fast, and that there is still more to be done before the rest of our scientific establishment can catch up. Popular society is literally not yet sane enough to handle a deep understanding of sanity.

KD – I have this metaphor that I use every once in a while, saying that technology is a magnifying mirror. It doesn’t have an essence of its own but it reflects the essences that we put in it. It’s not a perfect image because it magnifies and amplifies things. That seems to go well with the idea that we have to understand the nature of AI to understand who we are.

JB – The practice of AI is 90% automation of statistics – making better statistics that run automatically on machines. It just so happens that this is largely co-extensional with what minds do. It also so happens that AI was founded by people like Minsky who had fundamental questions about reality.

KD – And what’s the last 10%?

JB – The rest is people coming up with dreams about our relationship to reality, using the concepts we develop in AI. We identify models that we can apply in other fields. It’s the deeper insights; it’s why we do it – to understand, to make philosophy better. Society still needs a few of us to think about the deep questions, and we are still here, and the coffee is good.

CW – Thanks for taking the time to put out quality discussions like this. I agree that technology is a neutral reflector and magnifier of what we put into it, but I think that part of what we have to confront, as individuals and as a society, is that neutrality may not be enough. We may now have to decide whether to make a stand for authentic feeling and significance or to let technology, which neither feels nor understands significance, make that decision for us.
