
AI is Inside Out

November 18, 2015

The subjective world is an arena of sense that is surrounded by an unseen sensor. Unlike a computer, which finds its own data stored in precise and irreducibly knowable bits, we find our own introspection to be confoundingly mysterious. Both the interior and exterior world are presented to us as a natural given to be explored, but the methods of exploration are diametrically opposite. Penetrating the psyche leads to an examination of symbols that are both intensely personal as well as anthropologically universal.

Whether we explore the objective world or the subjective world, we do so from the inside out, as visitors in a universe that matters to us whether we like it or not. To understand how machine intelligence differs from natural consciousness, it is important to see that a machine’s world is taken rather than given. The machine’s world is assembled from the bottom up, through disconnected, instrumental samplings.

It can be argued that our sense of the world is also nothing more than a collection of readings taken by our sense organs, but if that were the case, we should not experience the outside world as a complete environment, but rather as a probabilistic blur that is punctuated by islands of known data. A machine’s view of the outside world should (and would) look like this:

[Image: bag]

This showed that even when shown millions of photos, the computer couldn’t come up with a perfect Platonic form of an object. For instance, when asked to create a dumbbell, the computer depicted long, stringy arm-things stretching from the dumbbell shapes. Arms were often found in pictures of dumbbells, so the computer thought that sometimes dumbbells had arms.

Similarly, images that have been probabilistically ‘reconstructed’ from fMRI data show the same incoherence:

[Image: fMRI reconstruction]

These are images that have been simulated from the outside in – a mosaic of meaningless elements spread out over a canvas seen by no one. These are not the kinds of visions that we have when we encounter the depths of our own psyche, which are invariably spectacular, if surreal, dreamscapes. By contrast, these early machine models of visual encoding show us a soulless sub-realism made of digital gas; a Bayesian partlessness gliding arbitrarily toward programmed compartments.

Although a machine’s introspection need not have any visual appearance at all, it makes sense that if it did, what would be seen might look something like a debugger interface, full of detailed, unambiguous data about the state of the machine.

[Image: debugger interface]

It would be bizarre to have a layer of all-but-incomprehensible fiction in between the machine and its own functions. Even if the dashboard of such a complex machine used a lot of compression techniques, surely that compression would not be a mystery to the machine itself.

The point that I’m trying to get across here is that what we are developing in machines is actually an anti-subjectivity. Its world is fuzzy and delirious on the outside, and clearly circumscribed on the inside – exactly the reverse of our natural awareness. Machine psychology is a matter of compiling the appropriate reports and submitting them for error correction auditing, while machine perception is a tenuous process of probing and guessing in the dark. Our own inner depths seem to defy all machine expectations, containing neither useful reports on the state of our brain nor unnatural chaos. Our view of the world outside of ourselves is not one which seems to be manufactured on the fly but one which imparts a profound, pervasive sense of orientation and clarity.

Edit: 7/23/16, another example: http://www.fastcodesign.com/3062016/this-neural-network-makes-human-faces-from-scratch-and-theyre-terrifying

[Image: computer-generated faces]

Edit 12/19/16, see also https://multisenserealism.com/2016/12/19/fooling-computer-image-recognition-is-easier-than-it-should-be/

Edit 5/19/17 https://arstechnica.com/information-technology/2017/05/an-ai-invented-a-bunch-of-new-paint-colors-that-are-hilariously-wrong/

[Image: image_recognition]

Edit: 6/29/17 – https://wordpress.com/post/multisenserealism.com/5161

[Image: artmonstern]

7/22/17 – https://blog.keras.io/the-limitations-of-deep-learning.html

“One very real risk with contemporary AI is that of misinterpreting what deep learning models do, and overestimating their abilities. A fundamental feature of the human mind is our “theory of mind”, our tendency to project intentions, beliefs and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it “happy”—in our minds. Applied to deep learning, this means that when we are able to somewhat successfully train a model to generate captions to describe pictures, for instance, we are led to believe that the model “understands” the contents of the pictures, as well as the captions it generates. We then proceed to be very surprised when any slight departure from the sort of images present in the training data causes the model to start generating completely absurd captions.”

1/6/18 – https://gizmodo.com/this-simple-sticker-can-trick-neural-networks-into-thin-1821735479/amp

[Image: banana]
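As an aside, the mechanism behind these sticker and perturbation attacks is easy to caricature. A classifier’s “recognition” is just arithmetic on numbers, so nudging the input a small amount in exactly the direction that moves the score can flip the decision while barely changing the input. Here is a toy sketch of my own – a hand-fixed linear classifier, not the actual attack in the article:

```python
import numpy as np

# A tiny hand-fixed linear classifier: score = w.x + b, class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = -0.1

x = np.array([0.2, 0.1, 0.4, 0.05])   # a benign input
score = w @ x + b                     # 0.2 - 0.2 + 0.2 + 0.15 - 0.1 = 0.25 > 0

# FGSM-style perturbation: shift each input component a tiny amount (eps)
# against the score's gradient, which for a linear model is just w.
eps = 0.1
x_adv = x - eps * np.sign(w)          # max change per component: 0.1

adv_score = w @ x_adv + b             # score drops by eps * sum(|w|) = 0.65
print(score, adv_score)               # the sign flips: the "banana" is gone
```

The point of the toy is only that nothing in the arithmetic distinguishes a meaningful image from a meaningless one; the decision boundary is indifferent to what we would call seeing.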

5/2/2019 – https://www.theatlantic.com/science/archive/2019/05/ai-evolved-these-trippy-images-to-please-a-monkeys-neurons/588517

[Image: ageofAI]

AI is Still Inside Out

June 29, 2017

[Image: artmonstern]

Turn your doodles into madness.

I think this is a good example of how AI is ‘inside out’. It does not produce top-down perception and sensations in its own frame of awareness, but rather it is a blind seeking of our top-down perception from a completely alien, unconscious perspective.

The result is not like an infant’s consciousness learning about the world from the inside out and becoming more intelligent, rather it is the opposite. The product is artificial noise woven together from the outside by brute force computation until we can almost mistake its chaotic, mindless, emotionless products for our own reflected awareness.

This particular program appears designed to make patterns that look like monsters to us, but that isn’t why I’m saying it’s an example of AI being inside out. The point is that this program exposes image processing as a blind process of arithmetic simulation rather than any kind of seeing. The result is a graphic simulacrum…a copy with no original which, if we’re not careful, can eventually tease us into accepting it as a genuine artifact of machine experience.

See also: https://multisenserealism.com/2015/11/18/ai-is-inside-out/

Time for an update (6/29/22) to further demonstrate the point:
[Image: ai_genius]

Joscha Bach, Yulia Sandamirskaya: “The Third Age of AI: Understanding Machines that Understand”

September 23, 2022


Here are my comments and Extra Annoying Questions on this recent discussion. I like and admire/respect both of them and am not claiming to have competence in the specific domains of AI development they’re speaking on, only in the metaphysical/philosophical domains that underlie them. I don’t even disagree with the merits of each of their views on how to best proceed with AI dev in the near future. What fun would it be to write about what I don’t disagree with though? My disagreements are with the big, big, big picture issues of the relationship of consciousness, information processing, and cosmology.

Jumping right in near the beginning…

“The intensity gets associated with brightness and the flatness gets associated with the absence of brightness, with darkness”

Joscha 12:37

First of all, the (neuronal) intensity and flatness *already are functionally just as good as* brightness and darkness. There is no advantage to conjuring non-physical, non-parsimonious, unexplained qualities of visibility to accomplish the exact same thing as was already being accomplished by invisible neuronal properties of ‘intensity’ and ‘flatness’. 

Secondly, where are the initial properties of intensity and flatness coming from? Why take those for granted but not sight? In what scope of perception and aesthetic modality is this particular time span presented as a separate event from the totality of events in the universe? What is qualifying these events of subatomic and atomic positional change, or grouping their separate instances of change together as “intense” or “flat”? Remember, this is invisible, intangible, and unconscious. It is unexperienced. A theoretical neuron prior to any perceptual conditioning that would make it familiar to us as anything resembling a neuron, or an object, or an image.

Third, what is qualifying the qualification of contrast, and why? In a hypothetical ideal neuron before all conscious experience and perception, the mechanisms are already doing what physical forces mechanically and inevitably demand. If there is a switch or gate shaped structure in a cell membrane that opens when ions pile up, that is what is going to happen regardless of whether there is any qualification of the piling of ions as ‘contrasting’ against any subsequent absence of piles of ions. Nothing is watching to see what happens if we don’t assume consciousness. So now we have exposed as unparsimonious and epiphenomenal to physics not only visibility (brightness and darkness) and observed qualities of neuronal activity (intensity and flatness), but also the purely qualitative evaluation of ‘contrast’. Without consciousness, there isn’t anything to cause a coherent contrast that defines the beginning and ending of an event.
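To make the mechanical point concrete, here is a toy “gate” of my own (a caricature, not a real neuron model): charge piles up, leaks away, and crosses a threshold, at which point the gate opens. Nothing in the code qualifies anything as ‘contrast’ or watches for an event – it is just arithmetic on a number.

```python
# Toy leaky integrate-and-fire "gate": charge accumulates and leaks, and
# when it crosses a threshold the gate opens and resets. No observer,
# no 'contrast' -- just numbers changing.
def run_gate(inputs, threshold=1.0, leak=0.9):
    charge, opened_at = 0.0, []
    for t, pulse in enumerate(inputs):
        charge = charge * leak + pulse    # ions "pile up" and leak away
        if charge >= threshold:
            opened_at.append(t)           # the gate opens
            charge = 0.0                  # and resets
    return opened_at

print(run_gate([0.5, 0.5, 0.5, 0.0, 0.0, 0.9, 0.9]))   # fires at steps 2 and 6
```

The gate “fires” whenever physics demands it, whether or not anything anywhere registers the firing as an event.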

  • 13:42 I do like Joscha’s read of the story of Genesis as a myth describing consciousness emerging from a neurological substrate, however I question why the animals he mentions are constructed ‘in the mind’ rather than discovered. Also, why so much focus on sight? What about the other senses? We can feel the heat of the sun – why not make animals out of arrays of warm and cool pixels instead of bright and dark? Why have multiple modes of aesthetic presentation at all? Again – where is the parsimony that we need for a true solution to the hard problem / explanatory gap? If we already have molecules doing what molecules must do in a neuron, which is just move or resist motion, how and why do we suddenly reach for ‘contrast’-ing qualities? If we follow physical parsimony strictly, the brain doesn’t do any ‘constructing’ of brightness, or 3d sky, or animals. The brain is *already* constructing complex molecular shapes that do everything that a physical body could possibly evolve to do – without any sense or experience and just using a simple geometry of invisible, unexperienced forces. What would a quality of ‘control’ be doing in a physical universe of automatic, statistical-mechanical inevitables?

“I suspect that our culture actually knew, at some point, that reality, and the sense of reality and being a mind, is the ability to dream – the ability to be some kind of biological machine that dreams about a world that contains it.”

Joscha 14:28

This is what I find so frustrating about Joscha’s view. It is SO CLOSE to getting the bigger picture but it doesn’t go *far enough*. Why doesn’t he see that the biological machine would also be part of the dream? The universe is not a machine that dreams (how? why? parsimony, hard problem) – it’s a dream that machines sometimes. Or to be more precise (and to advertise my multisense realism views), the universe is THE dream that *partially* divides itself into dreams. I propose that these diffracted dreams lens each other to seem like anti-dreams (concrete physical objects or abstract logical concepts) and like hyper-dreams (spiritual/psychedelic/transpersonal/mytho-poetic experiences), depending on the modalities of sense and sense-making that are available, and whether they are more adhesive to the “Holos” or more cohesive to the “Graphos” end of the universal continuum of sense.

“So what do we learn from intelligence in nature? So first, if we want to try to build it, we need to start with some substrates. So we need to start with some representations.”

Yulia 16:08

Just noting this statement because in my understanding, a physical substrate would be a presentation rather than a re-presentation. If we are talking about the substrates in nature we are talking about what? Chemistry? Cells made of molecules? Shapes moving around? Right away Yulia’s view seems to give objects representational abilities. I understand that the hard problem of consciousness is not supposed to be part of the scope of her talk, but I am that guy who demands that at this moment in time, it needs to be part of every talk that relates to AI!

“…and in nature the representations used seem to be not distributed. Neural networks, if you’re familiar with those, multiple units, multi-dimensional vectors represent things in the world…and not just (you know) single symbols.”

Yulia 16:20

How is this power of representation given to “units” or “vectors”, particularly if we are imagining a universe prior to consciousness? Must we assume that parts of the world just do have this power to symbolize, refer to, or seem like other parts of the world in multiple ways? That’s fine, I can set aside consciousness and listen to where she is going with this.

17:16: I like what Yulia brings up about the differences between natural and technological approaches as far as nature (biology really). She says that nature begins with dynamic stability by adaptation to change (homeostasis, yes?) while AI architecture starts with something static and then we introduce change if needed. I think that’s a good point, and relate it to my view that “AI is Inside Out”. I agree and go further to add that not only does nature begin with change and add stasis when needed but nature begins with *everything* that it is while AI begins with *nothing*…or at least it did until we started using enormous sets of training data from the world.

  • to 18:14: She’s discussing the lag between sensation and higher cognition…the delay that makes prediction useful. This is a very popular notion and it is true as far as it goes. Sure, if we look at the events in the body as a chain reaction on the micro timescale, then there is a sequence going from retina to optical nerve to visual cortex, etc – but I would argue this is only one of many timescales that we should understand and consider. In other ways, my body’s actions are *behind* my intentions for it. My typing fingers are racing to keep up with the dictation from my inner voice, which is racing to keep up with my failing memory of the ideas that I want to express. There are many agendas hovering over and above my moment-to-moment perceptions, only some of which I am personally aware of at any given moment, though I recognize my control over them in the long term. To look only at the classical scale of time and biology is to fall prey to the fallacy of smallism.
https://plato.stanford.edu/entries/panpsychism/

I can identify at least six modes of causality/time with only two of them being sequential/irreversible.

The denial of other modes of causality becomes a problem if the thing we’re interested in – personal consciousness, does not exist on that timescale or causality mode that we’re assuming is the only one that is real. I don’t think that we exist in our body or brain at all. The brain doesn’t know who we are. We aren’t there, and the brain’s billions of biochemical scale agendas aren’t here. Neither description represents the other, and only the personal scale has the capacity to represent anything. I propose that they are different timescales of the same phenomenon, which is ‘consciousness’, aka nested diffractions of the aesthetic-participatory Holos. One does not cause the other in the same way that these words you see on your screen are not causing concepts to be understood, and the pixels of the screen aren’t causing a perception of them as letters. They coincide temporally, but are related only through a context of conscious perception, not built up from unconscious functions of screens, computers, bodies, or brains.

  • to 25:39 …cool stuff about insect brains, neural circuits etc. 
  • 25:56 talking about population coding, distributed representations. I disagree with the direction that representation is supposed to take here, as far as I think that it is important to at least understand that brain functions cannot *literally* re-present anything. It is actually the image of the brain that is a presentation in our personal awareness that iconically recapitulates some aspects of the subpersonal timescale of awareness that we’re riding on top of. Again, I think we’re riding in parallel, not in series, with the phenomenon that we see as brain activity. I suggest that the brain activity never adds up to a conscious experience. The brain is the physical inflection point of what we do to the body and what the body does to us. Its activity is already a conscious experience in a smaller and larger timescale than our own, that is being used by the back end of another, personal timescale of conscious experience. What we see as the body is, in that timescale of awareness that is subpersonal rather than subconscious, a vast layer of conscious experiences that only look like mechanisms because of the perceptual lensing that diffracts perspective from all of the others. The personal scope of awareness sees the subpersonal scope of awareness as a body/cells/molecules because it’s objectifying the vast distance between that biological/zoological era of conscious experience so that it can coexist with our own. It is, in some sense, our evolutionary past – still living prehistorically. We relate to it as an alien community through microscopes and other instruments. I say this to point the way toward a new idea. I’m not expecting that this would be common knowledge and I don’t consider that cutting edge thinkers like Sandamirskaya and Bach are ‘wrong’ for not thinking of it that way. Yes, I made this view of the universe up – but I think that it does actually work better than the alternatives that I have seen so far.
  • to 34:00 talking about the unity of the brain’s physical hardware with its (presumed) computing algorithms vs the disjunction between AI algorithms and the hardware/architectures we’ve been using. Good stuff, and again aligns with my view of AI being inverted or inside out. Our computers are a bottom-up facade that imitates some symptoms of some intelligence. Natural intelligence is bottom up, top down, center out, periphery in, and everything in between. It is not an imitation or an algorithm but it uses divided conscious experience to imitate and systemize as well as having its own genuine agendas that are much more life affirming and holistic than mere survival or control. Survival and control are annoyances for intelligence. Obstructions to slow down the progress from thin scopes of anesthetized consciousness to richer aesthetics of sophisticated consciousness. Yulia is explaining why neuroscience provides a good example of working AI that we should study and emulate – I agree that we should, but not because I think it will lead to true AGI, just that it will lead to more satisfying prosthetics for our own aesthetic-participatory/experiential enhancement…which is really what we’re trying to do anyhow, rather than conjure a competing inorganic super-species that cannot be killed.

When Joscha resumes after 34:00, he discusses Dall-E and the idea of AI as ‘dreaming’ but at the same time as ‘brute force’ with superhuman training on 800 million images. Here I think the latter is mutually exclusive of the former. Brute force training yes, dreaming and learning, no. Not literally. No more than a coin sorter learns banking. No more than an emoji smiles at us. I know this is tedious but I am compelled to continue to remind the world about the pathetic fallacy. Dall-E doesn’t see anything. It doesn’t need to. It’s not dreaming up images for us. It’s a fancy cash register that we have connected to a hypnotic display of its statistical outputs. Nothing wrong with that – it’s an amazing and mostly welcome addition to our experience and understanding. It is art in a sense, but in another it’s just a Ouija board through which we see recombinations of art that human beings have made for other human beings based on what they can see. If we want to get political about it, it’s a bit of a colonial land grab for intellectual property – but I’m ok with that for the moment.

In the dialogue that follows in the middle of the video, there is some interesting and unintentionally connected discussion about the lack of global understanding of the brain and the lack of interdisciplinary communication within academia between neuroscientists, cognitive scientists, neuromorphic engineers. (philosophers of mind not invited ;( ).
Note to self: get a bit more background on the AI silver bullet of the moment, the stochastic gradient descent algorithm.
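For my own reference, the idea is simple enough to caricature in a few lines (a toy example of mine, not anyone’s production code): at each step, pick one random training sample and nudge the parameters downhill along the gradient of the error on just that sample.

```python
import random

# Toy data generated by the (hidden) rule y = 2x + 1
data = [(x, 2 * x + 1) for x in [i / 10 for i in range(-20, 21)]]

w, b = 0.0, 0.0          # parameters to learn
lr = 0.05                # learning rate
random.seed(0)

for step in range(2000):
    x, y = random.choice(data)    # "stochastic": one random sample per step
    pred = w * x + b
    err = pred - y                # derivative of 0.5*err^2 w.r.t. pred
    w -= lr * err * x             # descend along the gradient
    b -= lr * err

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0
```

The “silver bullet” part is just that this per-sample nudging scales to billions of parameters, which is exactly the brute-force, outside-in character I keep describing.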

Bach and Sandamirskaya discuss the benefits and limitations of the neuromorphic, embodied hardware approach vs investing more in building simulations using traditional computing hardware. We are now into the shop talk part of the presentation. I’m more of a spectator here, so it’s interesting but I have nothing to add.

By 57:12 Joscha makes a hypothesis about the failure of AI thus far to develop higher understanding.

“…the current systems are not entangled with the world, but I don’t think it’s because they are not robots, I think it’s because they’re not real time.”

To this I say it’s because ‘they’ are not real. It’s the same reason why the person in the mirror isn’t actually looking back at you. There is no person there. There is an image in our visual awareness. The mirror doesn’t even see it. There is no image for the mirror, it’s just a plane of electromagnetically conditioned metal behind glass that happens to do the same kind of thing that the matter of our eyeballs does, which is just optical physics that need not have any visible presentation at all.

The problem is the assumption that we are our body, or are in our body, or are generated by a brain/body rather than seeing physicality as a representation of consciousness on one timescale that is more fully presented in another that we can’t directly access. When we see an actor in a movie, we are seeing a moving image and hearing sound. I think that the experience of that screen image as a person is made available to us not through processing of those images and sounds but through the common sense that all images and sounds have with the visible and aural aspects of our personal experience. We see a person *through* the image rather than because of it. We see the ‘whole’ through ‘holes’ in our perception.

This is a massive intellectual shift, so I don’t expect anyone to be able to pull it off just by thinking about it for 30 seconds, even if they wanted to. It took several years of deep consideration for me. The hints are all around us though. Perceptual ‘fill-in’ is the rule, not the exception. Intuition. Presentiment. Precognitive dreams, remote viewing, and other psi. NDEs. Blindsight and synesthesia.

When we see each other as an image of a human body we are using our own limited human sight, which is also limited by the animal body>eyes>biology>chemistry>physics. All of that is only the small illuminated subset of consciousness-that-we-are-personally-conscious-of-when-we-are-normatively-awake. It should be clear that is not all that we are. I am not just these words, or the writer of these words, or a brain or a body, or a process using a brain or body, I am a conscious experience in a universe of conscious experiences that are holarchically diffracted (top down, bottom up, center out, etc). My intelligence isn’t an algorithm. My intelligence is a modality of awareness that uses algorithms and anti-algorithms alike. It feasts on understanding like olfactory-gustatory awareness feasts on food.

Even that is not all of who I am, and even “I” am not all of the larger transpersonal experience that I live through and that lives through me. I think that people who are gifted with deep understanding of mathematics and systemizing logic tend to have been conditioned to use that part of the psyche to the exclusion of other modes of sense and sense making, leaving the rich heritage of human understanding of larger psychic contexts to atrophy, or worse, reappear as a projected shadow appearance of ‘woo’ to the defensive ego, still wounded from the injury of centuries under our history of theocratic rule. This is of course very dangerous, and what is more dangerous still is that you need that atrophied part of the psyche to understand why it is dangerous…which is why seeing the hard problem in the first place is too hard for many people, even many philosophers who have been discussing it for decades.

Synchronistically, I now return to the video at 57:54, where Yulia touches on climate change (or more importantly, from our perspective, climate destabilization) and the flawed expectation of mind uploading. I agree with her that it won’t work, although probably for different reasons. It’s not because the substrate matters – it does, but only because the substrate itself is a lensing artifact masking what is actually the totality of conscious experience.

Organic matter and biology are a living history of conscious experience that cannot be transcended without losing the significance and grounding of that history. Just as our body cannot survive by drinking an image of water, higher consciousness cannot flourish in a sandbox of abstract semiotic switches. We flourish *in spite of* the limits of body and brain, not because our experience is being generated by them.

This is not to say that I think organic matter and biology are in any way the limits of consciousness or human consciousness, but rather they are a symptom of the recipe for the development of the rich human qualities of consciousness that we value most. The actual recipe of human consciousness is made of an immense history of conscious experience, wrapped around itself in obscenely complicated ways that might echo the way that protein structures are ordered. This recipe includes seemingly senseless repetition of particular conscious experiences over vast durations of time. I don’t think that this authenticity can be faked. Unlike the patina of an antique chair or the bouquet of a vintage wine that could in theory be replicated artificially, the humanness of human consciousness depends on the actual authenticity of the experience. It actually takes billions of years of just these types of physical > chemical > organic > cellular > somatic > cerebral > anthropological > cultural > historical experiences to build the capacity to appreciate the richness and significance of those layers. Putting a huge data set end product of that chain of experience in the hands of a purely pre-organic electrochemical processor and expecting it to animate into human-like awareness is like trying to train a hydrogen bomb to sing songs around a campfire.

On Sentience and AI

June 15, 2022
A comment on this article in the Atlantic: https://www.theatlantic.com/technology/archive/2022/06/google-engineer-sentient-ai-chatbot/661273

Sean Prophet, I am certain that the current generation of software is not sentient and my understanding is that it may in fact be impossible to assemble any sentient device. This is not, as you claim with certitude, based on unsupportable hubris and fear, but on decades of deep contemplation and discussion about the nature of consciousness, information, and matter. My view is unique but informed by the ideas of many, many philosophers, scientists, mystics, and mathematicians throughout human history.

I do not worry about machines replacing humans. I’m not particularly fond of humans en masse, but I recognize that humans are responsible for many of the best and only a few of the worst things about the world that we now live in – including computers.

My journey has gone from seeing the world through the lens of atheistic materialism to psychedelic spiritualism, to Neoplatonic monotheism, to what I call Multisense Realism. I think that reality is ultimately a kind of art gallery that experiences itself – a self-diffracting, cosmopsychic Holos of aesthetic-participatory phenomena in which anesthetic-automatic appearances are rendered as lensing artifacts: Lorentz-like perceptual transforms that make conscious experience on one timescale seem like ‘matter’ or ‘information’ to consciousness on another timescale. We are not ‘data’. We are not information-processing systems or material-energetic bodies. Both of those are appearances within the real world of authentic, and direct (if highly filtered) perception.

It’s my understanding that because machines are assembled from tangible parts and intangible rules, they are not like the bodies of natural objects. They have not evolved inevitably as tangible symptoms of a trans-tangible experiential phenomenon but have been devised and deployed by the ‘inside’ appearance of one type of conscious experience onto the ‘outside’ appearance of another. In our case, our AI efforts are deployed on geochemical substrates by an anthropological-zoological consciousness, using matter as a vehicle to reflect an inverted image of our own most superficial intellectual but most sophisticated dimensions of sense-making.

I know this sounds over the top, and to be honest, I’m not really writing this to be understood by people who are not fluent in the deep currents of philosophy of mind and computation. I’m no longer qualified to talk about this stuff to a general audience. My views pick up where conventional views of this historical moment leave off. You have to have already accepted the hard problem of consciousness and questioned panpsychism to open the door that my worldview is behind.

Anyhow, while we are on diametrically opposite sides of this issue Sean, I know with certainty that it is not for the reasons that you think and project onto (at least some of) us. I have not really run into many fans of human beings who are terrified of losing their specialness. That is a stereotype that I do not find pans out in reality. Instead, I find a dichotomy: on one side, a group of highly educated, highly intelligent men at the extreme systemizing end of the systemizing-empathizing (I call it cohesive-adhesive) spectrum of consciousness, without much theory-of-mind skill, falling into a trap of their own hubris; on the other, a mostly unwitting public with neither the time nor the interest to care about the subject, who, when forced to consider it, intuitively know that machines aren’t literally conscious but can’t explain why.

I think that I have explained why, although it is spread out over thousands of pages of conversations and essays. For anyone who wants to follow that trail of breadcrumbs, here’s a place to start.

https://multisenserealism.com/?s=ai+is+inside+out

Joscha Bach: We need to understand the nature of AI to understand who we are – Part 2

December 17, 2018

This is the second part of my comments on Nikola Danaylov’s interview of Joscha Bach: https://www.singularityweblog.com/joscha-bach/

My commentary on the first hour is here. Please watch or listen to the podcast as there is a lot that is omitted and paraphrased in this post. It’s a very fast paced, high-density conversation, and I would recommend listening to the interview in chunks and following along here for my comments if you’re interested.

[Image: JB_Part2]

1:00:00 – 1:10:00

JB – Conscious attention in a sense is the ability to make indexed memories that I can later recall. I also store the expected result and the triggering condition. When do I expect the result to be visible? Later I have feedback about whether the decision was good or not. I compare the result I expected with the result that I got and I can undo the decision that I made back then. I can change the model or reinforce it. I think that this is the primary mode of learning that we use, beyond just associative learning.

JB – 1:01:00 Consciousness means that you will remember what you had attended to. You have this protocol of ‘attention’. The memory of the binding state itself, the memory of being in that binding state where you have this observation that combines as many perceptual features as possible into a single function – the memory of that is phenomenal experience. The act of recalling this from the protocol is Access Consciousness. You need to train the attentional system so it knows where you store your backend cognitive architecture. This is recursive access to the attentional protocol: you remember when you make the recall. You don’t do this all the time, only when you want to train this. This is reflexive consciousness. It’s the memory of the access.

CW – By that definition, I would ask if consciousness couldn’t exist just as well without any phenomenal qualities at all. It is easy to justify consciousness as a function after the fact, but I think that this seduces us into thinking that something impossible can become possible just because it could provide some functionality. To say that phenomenal experience is a memory of a function that combines perceptual features is to presume that there would be some way for a computer program to access its RAM as perceptual features rather than as the (invisible, unperceived) states of the RAM hardware itself.

JB – Then there is another thing, the self. The self is a model of what it would be like to be a person. The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them. They cannot even approximate arbitrary functions, except by evolution, which takes a very long time. What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear. A simulation basically is isomorphic in the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating. We are characters in that movie. The movie is a complete simulation, a VR that is running in the neocortex.

You and me are characters in this VR. In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe. We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification. That doesn’t seem to be a useful condition. The brain is normally set up so that the self thinks that it’s real, and gets access to the language center, and we can talk to each other, and here we are. The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious. Some people think that a simulation cannot be conscious, only a physical system can, but they’ve got it completely backwards. A physical system cannot be conscious, only a simulation can be conscious. Consciousness is a simulated property of a simulated self.

CW – To say “The self is a model of what it would be like to be a person” seems to be circular reasoning. The self is already what it is like to be a person. If it were a model, then it would be a model of what it’s like to be a computer program with recursively binding states. Then the question becomes, why would such a model have any “what it’s like to be” properties at all? Until we can explain exactly how and why a phenomenal property is an improvement over the absence of a phenomenal property for a machine, there’s a big problem with assuming the role of consciousness or self as ‘model’ for unconscious mechanisms and conditions. Biological machines don’t need to model, they just need to behave in the ways that tend toward survival and reproduction.

(JB) “The brain is not a person. The brain cannot feel anything, it’s a physical system. Neurons cannot feel anything, they’re just little molecular machines with a Turing machine inside of them”.

CW – I agree with this, to the extent that I agree that if there were any such thing as *purely* physical structures, they would not feel anything, and they would just be tangible geometric objects in public space. I think that rather than physical activity somehow leading to emergent non-physical ‘feelings’, it makes more sense to me that physics is made of “feelings” which are so distant and different from our own that they are rendered as tangible geometric objects. It could be that physical structures appear in these limited modes of touch perception rather than in their own native spectrum of experience because they are much slower/faster and older than our own.

To say that neurons or brains feel would be, in my view, a category error since feeling is not something that a shape can logically do, just by Occam’s Razor, and if we are being literal, neurons and brains are nothing but three-dimensional shapes. The only powers that a shape could logically have are geometric powers. We know from analyzing our dreams that a feeling can be symbolized as a seemingly solid object or a place, but a purely geometric cell or organ would have no way to access symbols unless consciousness and symbols are assumed in the first place.

If a brain has the power to symbolize things, then we shouldn’t call it physical. The brain does a lot of physical things but if we can’t look into the tissue of the brain and see some physical site of translation from organic chemistry into something else, then we should not assume that such a transduction is physical. The same goes for computation. If we don’t find a logical function that changes algorithms into phenomenal presentations then we should not assume that such a transduction is computational.

(JB) “What do we do if you are a brain that figures out that it would be very useful to know what it is like to be a person? It makes one. It makes a simulation of a person, a simulacrum to be more clear.”

CW – Here also the reasoning seems circular. Useful to know what? “What it is like” doesn’t have to mean anything to a machine or program. To me this is like saying that a self-driving car would find it useful to create a dashboard and pretend that it is driven by a person using that dashboard rather than being driven directly by the algorithms that would be used to produce the dashboard.

(JB) “A simulation basically is isomorphic in the behavior of a person, and that thing is pretending to be a person, it’s a story about a person. You and me are persons, we are selves. We are stories in a movie that the brain is creating.”

CW – I have thought of it that way, but now I think that it makes more sense if we see both the brain and the person as parts of a movie that is branching off from a larger movie. I propose that timescale differentiation is the primary mechanism of this branching, although timescale differentiation is only one sort of perceptual lensing that allows experiences to include and exclude each other.

I think that we might be experiential fragments of an eternal experience, and a brain is a kind of icon that represents part of the story of that fragmentation. The brain is a process made of other processes, which are all experiences that have been perceptually lensed by the senses of touch and sight to appear as tangible and visible shapes.

The brain has no mechanical reason to make movies, it just has to control the behavior of a body in such a way that repeats behaviors which have happened to coincide with bodies surviving and reproducing. I can think of some good reasons why a universe which is an eternal experience would want to dream up bodies and brains, but once I plug up all of the philosophical leaks of circular reasoning and begging the question, I can think of no plausible reason why an unconscious body or brain would or could dream.

All of the reasons that I have ever heard arise as post hoc justifications that betray an unscientific bias toward mechanism. In a way, the idea of mechanism as omnipotent is even more bizarre than the idea of an omnipotent deity, since the whole point of a mechanistic view of nature is to replace undefined omnipotence with robustly defined, rationally explained parts and powers. If we are just going to say that emergent phenomenal magic happens once the number of shapes or data relations is so large that we don’t want to deny any power to it, we are really just reinventing religious faith in an inverted form. It is to say that sufficiently complex computations transcend computation for reasons that transcend computation.

(JB) “The movie is a complete simulation, a VR that is running in the neocortex.”

CW – We have the experience of playing computer games using a video screen, so we conflate a computer program with a video screen’s ability to render visible shapes. In fact, it is our perceptual relationship with a video screen that is doing the most critical part of the simulating. The computer by itself, without any device that can produce visible color and contrast, would not fool anyone. There’s no parsimonious or plausible way to justify giving the physical states of a computing machine aesthetic qualities unless we are expecting aesthetic qualities from the start. In that case, there is no honest way to call them mere computers.

(JB) “In that character, the brain writes our experiences, so we *feel* what it’s like to be exposed to the reward function. We feel what it’s like to be in our universe.”

CW – Computer programs don’t need desires or rewards though. Programs are simply executed by physical force. Algorithms don’t need to serve a purpose, nor do they need to be enticed to serve a purpose. There’s no plausible, parsimonious reason for the brain to write its predictive algorithms or meta-algorithms as anything like a ‘feeling’ or sensation. All that is needed for a brain is to store some algorithmically compressed copy of its own brain state history. It wouldn’t need to “feel” or feel “what it’s like”, or feel what it’s like to “be in a universe”. These are all concepts that we’re smuggling in, post hoc, from our personal experience of feeling what it’s like to be in a universe.

(JB)” We don’t feel that we are a story because that is not very useful knowledge to have. Some people figure it out and they depersonalize. They start identifying with the mind itself or lose all identification.”

CW – It’s easy to say that it’s not very useful knowledge if it doesn’t fit our theory, but we need to test for that bias scientifically. It might just be that people depersonalize or have negative results to the idea that they don’t really exist because it is false, and false in a way that is profoundly important. We may be as real as anything ever could be, and there may be no ‘simulation’ except via the power of imagination to make believe.

(JB) “The self is the thing that thinks that it remembers the contents of its attention. This is why we are conscious.”

CW – I don’t see a logical need for that. Attention need not logically facilitate any phenomenal properties. Attention can just as easily be purely behavioral, as can ‘memory’, or ‘models’. A mechanism can be triggered by groups of mechanisms acting simultaneously without any kind of semantic link defining one mechanism as a model for something else. Think of it this way: What if we wanted to build an AI without ANY phenomenal experience? We could build a social chameleon machine, a sociopath with no model of self at all, but instead a set of reflex behaviors that mimic those of others which are deemed to be useful for a given social transaction.
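
My "social chameleon" thought experiment can itself be sketched in code, which I think makes the point sharper: the agent below has no self-model anywhere in it, just a lookup table of reflexes copied from others. Everything here is a hypothetical illustration of my own devising.

```python
# A 'sociopath' agent with no model of self: it copies behaviors
# observed in others (when deemed useful) and replays them as pure
# reflexes keyed to social cues. No self, no phenomenal anything.
class ChameleonAgent:
    def __init__(self):
        self.reflexes = {}  # cue -> mimicked behavior; note: no 'self' model

    def observe(self, cue, behavior, useful):
        # Copy another agent's behavior only if it was deemed useful
        # for this kind of social transaction.
        if useful:
            self.reflexes[cue] = behavior

    def react(self, cue):
        # Pure reflex lookup: triggered mechanically, modeling nothing.
        return self.reflexes.get(cue, "do nothing")

agent = ChameleonAgent()
agent.observe("greeting", "smile and nod", useful=True)
agent.observe("insult", "flinch", useful=False)
print(agent.react("greeting"))  # replays the mimicked behavior
print(agent.react("insult"))    # no useful reflex stored, does nothing
```

The question this raises is the one in the paragraph above: if such an agent can pass its social transactions, what functional work was phenomenal experience supposed to be doing?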

(JB) “A physical system cannot be conscious, only a simulation can be conscious.”

CW – I agree this is an improvement over the idea that physical systems are conscious. What would it mean for a ‘simulation’ to exist in the absence of consciousness though? A simulation implies some conscious audience which participates in believing or suspending disbelief in the reality of what is being presented. How would it be possible for a program to simulate part of itself as something other than another (invisible, unconscious) program?

(JB) “Consciousness is a simulated property of a simulated self.”

CW – I turn that around 180 degrees. Consciousness is the sole absolutely authentic property. It is the base level sanity and sense that is required for all sense-making to function on top of. The self is the ‘skin in the game’ – the amplification of consciousness via the almost-absolutely realistic presentation of mortality.

KD – So in a way, Daniel Dennett is correct?

JB – Yes, […] but the problem is that the things that he says are not wrong, but they are also not non-obvious. It’s valuable because there are no good or bad ideas. It’s a good idea if you comprehend it and it elevates your current understanding. In a way, ideas come in tiers. The value of an idea for the audience is if it’s a half tier above the audience. You and me have an illusion that we find objectively good ideas, because we work at the edge of our own understanding, but we cannot really appreciate ideas that are a couple of tiers above our own ideas. One tier is a new audience, two tiers means that we don’t understand the relevance of these ideas because we don’t have the ideas that we need to appreciate the new ideas. An idea appears to be great to us when we can stand right in its foothills and look at it. It doesn’t look great anymore when we stand on the peak of another idea and look down and realize the previous idea was just the foothills to that idea.

KD – Discusses the problems with the commercialization of academia and the negative effects it has on philosophy.

JB – Most of us never learn what it really means to understand, largely because our teachers don’t. There are two types of learning. One is you generalize over past examples, and we call that stereotyping if we’re in a bad mood. The other tells us how to generalize, and this is indoctrination. The problem with indoctrination is that it might break the chain of trust. If someone doesn’t check the epistemology of the people that came before them, and take their word as authority, that’s a big difficulty.

CW – I like the ideas of tiers because it confirms my suspicion that my ideas are two or three tiers above everyone else’s. That’s why y’all don’t get my stuff…I’m too far ahead of where you’re coming from. 🙂

1:07:00 Discussion about Ray Kurzweil, the difficulty in predicting timeline for AI, confidence, evidence, outdated claims and beliefs etc.

1:19        JB – The first stage of AI: finding things that require intelligence to do, like playing chess, and then implementing them as algorithms. Manually engineering strategies for being intelligent in different domains. This didn’t scale up to General Intelligence.

We’re now in the second phase of AI, building algorithms to discover algorithms. We build learning systems that approximate functions. He thinks deep learning should be called compositional function approximation. Using networks of many functions instead of tuning single regressions.
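
JB's phrase "compositional function approximation" can be shown in miniature: instead of tuning a single regression y = w·x + b, compose several simple parameterized functions (layers), each feeding the next. The weights below are fixed by hand purely for illustration; a real system would learn them, and this tiny sketch is my own, not anything from the interview.

```python
# Compositional function approximation in miniature: a two-layer
# composition f2(f1(x)) of simple units (weighted sum + tanh),
# versus a single tuned regression.
import math

def layer(weights, biases, inputs):
    # One unit per weight row: weighted sum of inputs, plus bias,
    # passed through a tanh nonlinearity.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def network(x):
    # Compose two layers: the output of the first becomes the
    # input of the second. Weights are arbitrary illustrations.
    hidden = layer([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1], x)
    out = layer([[2.0, -2.0]], [0.0], hidden)
    return out[0]

print(network([0.3, 0.7]))  # a single bounded output in (-1, 1)
```

The contrast with "tuning single regressions" is that the composition can represent bent, saturating shapes that no single linear fit can, which is roughly what the deep-learning phase buys.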

There could be a third phase of AI where we build meta-learning algorithms. Maybe our brains are meta-learning machines, not just learning stuff but learning ways of discovering how to learn stuff (for a new domain). At some point there will be no more phases and science will effectively end because there will be a general theory for global optimization with finite resources and all science will use that algorithm.

CW – I think that the more experience we gain with AI, the more we will see that it is limited in ways that we have not anticipated, and also that it is powerful in ways that we have not anticipated. I think that we will learn that intelligence as we know it cannot be simulated, however, in trying to simulate it, we will have developed something powerful, new, and interesting in its impersonal orthogonality to personal consciousness. The revolution may not be about the rise of computers becoming like people but of a rise in appreciation for the quality and richness of personal conscious experience in contrast to the impersonal services and simulations that AI delivers.

1:23        KD – Where does ethics fit, or does it?

JB – Ethics is often misunderstood. It’s not about being good or emulating a good person. Ethics emerges when you conceptualize the world as different agents, and yourself as one of them, and you share purposes with the other agents but you have conflicts of interest. If you think that you don’t share purposes with the other agents, if you’re just a lone wolf, and the others are your prey, there’s no reason for ethics – you only look for the consequences of your actions for yourself with respect for your own reward functions. It’s not ethics though – not a shared system of negotiation because only you matter, because you don’t share a purpose with the others.

KD – It’s not shared but it’s your personal ethical framework, isn’t it?

JB – It has to be personal. I decided not to eat meat because I felt that I shared a purpose with animals: the avoidance of suffering. I also realized that it is not mutual. Cows don’t care about my suffering. They don’t think about it a lot. I had to think about the suffering of cows so I decided to stop eating meat. That was an ethical decision. It’s a decision about how to resolve conflicts of interest under conditions of shared purpose. I think this is what ethics is about. It’s a rational process in which you negotiate with yourself and with others, the resolution of conflicts of interest under contexts of shared purpose. I can make decisions about what purposes we share. Some of them are sustainable and others are not – they lead to different outcomes. In a sense, ethics requires that you conceptualize yourself as something above the organism; that you identify with the systems of meanings above yourself so that you can share a purpose. Love is the discovery of shared purpose. There needs to be somebody you can love that you can be ethical with. At some level you need to love them. You need to share a purpose with them. Then you negotiate, you don’t want them all to fail in all regards, and yourself. This is what ethics is about. It’s computational too. Machines can be ethical if they share a purpose with us.

KD – Other considerations: Perhaps ethics can be a framework within which two entities that do not share interests can negotiate in and peacefully coexist, while still not sharing interests.

JB – Not interests but purposes. If you don’t share purposes then you are defecting against your own interests when you don’t act on your own interest. It doesn’t have integrity. You don’t share a purpose with your food, other than that you want it to be nice and edible. You don’t fall in love with your food, it doesn’t end well.

CW – I see this as a kind of game-theoretic view of ethics…which I think is itself (unintentionally) unethical. I think it is true as far as it goes, but it makes assumptions about reality that are ultimately inaccurate, as they begin by defining reality in the terms of a game. I think this automatically elevates the intellectual function and its objectivizing/controlling agendas at the expense of the aesthetic/empathetic priorities. What if reality is not a game? What if the goal is not to win by being a winner but to improve the quality of experience for everyone and to discover and create new ways of doing that?

Going back to JB’s initial comment that ethics are not about being good or emulating a good person, I’m not sure about that. I suspect that many people, especially children, will be ethically shaped by encounters with someone, perhaps in the family or a character in a movie, who appeals to them and who inspires imitation. Whether their appeal is as a saint or a sinner, something about their style, the way they communicate or demonstrate courage, may align the personal consciousness with transpersonal ‘systems of meanings above’ themselves. It could also be a negative example that someone encounters: someone that you hate who inspires you to embody the diametrically opposite aesthetics and ideals.

I don’t think that machines can be ethical or unethical, not because I think humans are special or better than machines, but out of simple parsimony. Machines don’t need ethics. They perform tasks, not for their own purposes, or for any purpose, but because we have used natural forces and properties to perform actions that satisfy our purposes. Try as we might (and I’m not even sure why we would want to try), I do not think that we will succeed in changing matter or computation into something which both can be controlled by us and which can generate its own purposes. I could be wrong, but I think this is a better reason to be skeptical of AI than any reason that computation gives us to be skeptical of consciousness. It also seems to me that the aesthetic power of a special person who exemplifies a particular set of ethics can be taken to be a symptom of a larger, absolute aesthetic power in divinity or in something like absolute truth. This doesn’t seem to fit the model of ethics as a game-theoretic strategy.

JB – Discussion about eating meat, offers example pro-argument that it could be said that a pasture raised cow could have a net positive life experience since they would not exist but for being raised as food. Their lives are good for them except for the last day, which is horrible, but usually horrible for everyone. Should we change ourselves or change cattle to make the situation more bearable? We don’t want to look at it because it is un-aesthetic. Ethics in a way is difficult.

KD – That’s the key point of ethics. It requires sometimes we make choices that are not in our own best interests perhaps.

JB – Depends on how we define our self. We could say that the self is identical to the well-being of the organism, but this is a very short-sighted perspective. I don’t actually identify all the way with my organism. There are other things – I identify with society, my kids, my relationships, my friends, their well-being. I am all the things that I identify with and want to regulate in a particular way. My children are objectively more important than me. If I have to make a choice whether my kids survive or myself, my kids should survive. This is as it should be if nature has wired me up correctly. You can change the wiring, but this is also the weird thing about ethics. Ethics becomes very tricky to discuss once the reward function becomes mutable. When you are able to change what is important to you, what you care about, how do you define ethics?

CW – And yet, the reward function is mutable in many ways. Our experience in growing up seems to be marked by a changing appreciation for different kinds of things, even in deriving reward from controlling one’s own appetite for reward. The only constant that I see is in phenomenal experience itself. No matter how hedonistic or ascetic, how eternalist or existential, reward is defined by an expectation for a desired experience. If there is no experience that is promised, then there is no function for the concept of reward. Even in acts of self-sacrifice, we imagine that our action is justified by some improved experience for those who will survive after us.

KD – I think you can call it a code of conduct or a set of principles and rules that guide my behavior to accomplish certain kinds of outcomes.

JB – There are no beliefs without priors. What are the priors that you base your code of conduct on?

KD – The priors or axioms are things like diminishing suffering or taking an outside/universal view. When it comes to (me not eating meat), I take a view that is hopefully outside of me and the cows. I’m able to look at the suffering of eating a cow and their suffering of being eaten. If my prior is ‘minimize suffering’, because my test criterion for a sentient being is ‘can it suffer?’, then minimizing suffering must be my guiding principle in how I relate to another entity. Basically, everything builds up from there.

JB – The most important part of becoming an adult is taking charge of your own emotions – realizing that your emotions are generated by your own brain/organism, and that they are here to serve you. You’re not here to serve your emotions. They are here to help you do the things that you consider to be the right things. That means that you need to be able to control them, to have integrity. If you are just a victim of your emotions, and do not do the things that you know are the right things, you don’t have integrity. What is suffering? Pain is the result of some part of your brain sending a teaching signal to another part of your brain to improve its performance. If the regulation is not correct, because you cannot actually regulate that particular thing, the pain signal will usually endure and increase until your brain figures it out and turns off the signaling center, because it’s not helping. In a sense suffering is a lack of integrity. The difficulty is only that many beings cannot get to the degree of integrity where they can control the application of learning signals in their brain…control the way that their reward function is computed and distributed.
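
JB's account of pain – a teaching signal that endures and escalates while regulation fails, and is eventually shut off once the system learns the error is uncontrollable – reads like a control loop, and can be caricatured as one. The thresholds and multipliers below are arbitrary placeholders of my own, purely to make the dynamic visible.

```python
# A toy control loop for JB's account of pain: the teaching signal
# grows while the error goes unregulated, and is switched off once
# it exceeds a ceiling (the brain 'figures out' it isn't helping).
def pain_loop(error, controllable, max_signal=10.0):
    signal = 1.0
    history = []                    # the felt intensity over time
    while error > 0:
        history.append(signal)
        if controllable:
            error -= signal * 0.5   # regulation works: error shrinks
        else:
            signal *= 2.0           # regulation fails: signal endures and grows
            if signal > max_signal:
                # the signaling center is turned off as unhelpful
                return history, "signal shut off"
        signal = min(signal, max_signal)
    return history, "regulated"

print(pain_loop(error=3.0, controllable=True))   # steady signal until regulated
print(pain_loop(error=3.0, controllable=False))  # escalates 1, 2, 4, 8, then shuts off
```

My objection, stated just below, is that nothing in this loop needs to hurt: the escalation and shutoff work identically whether the signal is felt or not.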

CW – My criticism is the same as in the other examples. There’s no logical need for a program or machine to invent ‘pain’ or any other signal to train or teach. If there is a program to run an animal’s body, the program need only execute those functions which meet the criteria of the program. There’s no way for a machine to be punished or rewarded because there’s no reason for it to care about what it is doing. If anything, caring would impede optimal function. If a brain doesn’t need to feel to learn, then why would a brain’s simulation need to feel to learn?

KD – According to your view, suffering is a simulation or part of a simulation.

JB – Everything that we experience is a simulation. We are a simulation. To us it feels real. There is no getting around this. I have learned in my life that all of my suffering is a result of not being awake. Once I wake up, I realize what’s going on. I realize that I am a mind. The relevance of the signals that I perceive is completely up to the mind. The universe does not give me objectively good or bad things. The universe gives me a bunch of electrical impulses that manifest in my thalamus, and my brain makes sense of them by creating a simulated world. The valence in that simulated world is completely internal – it’s completely part of that world, it’s not objective…and I can control this.

KD – So you are saying suffering is subjective?

JB – Suffering is real to the self with respect to ethics, but it is not immutable. You can change the definition of your self, the things that you identify with. We don’t have to suffer about things, political situations for example, if we recognize them to be mechanical processes that happen regardless of how we feel about them.

CW – The problem with the idea of simulation is that we are picking and choosing which features of our experience are more isomorphic to what we assume is an unsimulated reality. Such an assumption is invariably a product of our biases. If we say that the world we experience is a simulation running on a brain, why not also say that the brain is a simulation running on something else? Why not say that our experience of success in manipulating our own experience of suffering is as much of a simulation as the original suffering was? At some point, something has to genuinely sense something. We should not assume that just because our perception can be manipulated we have used manipulation to escape from perception. We may perceive that we have escaped one level of perception, or objectified it, but this too must be presumed to be part of the simulation as well. Perception can only seem to have been escaped in another perception. The primacy of experience is always conserved.

I think that it is the intellect that is over-valuing the significance of ‘real’ because of its role in protecting the ego and the physical body from harm, but outside of this evolutionary warping, there is no reason to suspect that the universe distinguishes in an absolute sense between ‘real’ and ‘unreal’. There are presentations – sights, sounds, thoughts, feelings, objects, concepts, etc. – but the realism of those presentations can only be made of the same types of perceptions. We see this in dreams, with false awakenings etc. Our dream has no problem with spontaneously confabulating experiences of waking up into ‘reality’. This is not to discount the authenticity of waking up in ‘actual reality’, only to say that if we can tell that it is authentic, then it necessarily means that our experience is not detached from reality completely and is not meaningfully described as a simulation. There are some recent studies that suggest that our perception may be much closer to ‘reality’ than we thought, i.e. that we can train ourselves to perceive quantum level changes.

If that holds up, we need to re-think the idea that it would make sense for a bio-computer to model or simulate a phenomenal reality that is so isomorphic and redundant to the unperceived reality. There’s not much point in a 1 to 1 scale model. Why not just put the visible photons inside the visual cortex in exactly the field that we see? I think that something else is going on. There may not be a simulation, only a perceptual lensing between many different concurrent layers of experience – not a dualism or dual-aspect monism, but a variable aspect monism. We happen to be a very, very complex experience which includes the capacity to perceive aspects of its own perception in an indirect or involuted rendering.

KD – Stoic philosophy says that we suffer not from events or things that happen in our lives, but from the stories that we attach to them. If you change the story, you can change the way you feel about them and reduce suffering. Let go of things we can’t really control, body, health, etc. The only thing you can completely control is your thoughts. That’s where your freedom and power come to be. In that mind, in that simulation, you’re the God.

JB – This ability to make your thoughts more truthful is Western enlightenment – aufklärung in German. There is also this other sense of enlightenment, erleuchtung, that you have in a spiritual context. So aufklärung fixes your rationality and erleuchtung fixes your motivation. It fixes what’s relevant to you and your relationship between self and the universe. Often they are seen as mutually exclusive, in the sense that aufklärung leads to nihilism, because you don’t give up your need for meaning, you just prove that it cannot be satisfied. God does not exist in any way that can set you free. In this other sense, you give up your understanding of how the world actually works so that you can be happy. You go down to a state where all people share the same cosmic consciousness, which is complete bullshit, right? But it’s something that removes the illusion of separation and the suffering that comes with the separation. It’s unsustainable.

CW – This duality of aufklärung and erleuchtung I see as another expression of the polarity of the universal continuum of consciousness. Consciousness vs machine, East vs West, Wisdom vs Intelligence. I see both extremes as having pathological tendencies. The Western extreme is cynical, nihilistic, and rigid. The Eastern extreme is naïve, impractical, and delusional. Cosmic consciousness or God does not have to be complete bullshit, but it can be a hint of ways to align ourselves and bring about more positive future experiences, both personally and or transpersonally.

Basically, I think that both the brain and the dreamer of the brain are themselves part of a larger dream that may or may not be like a dreamer. It may be that these possibilities are in participatory superposition, like an ambiguous image, so that what we choose to invest our attention in can actually bias experienced outcomes toward a teleological or non-teleological absolute. Maybe our efforts could result in the opposite effect also, or some combination of the two. If the universe consists of dreams and dreamed dreamers, then it is possible for our personal experience to include a destiny where we believe one thing about the final dream and find out we were wrong, or right, or wrong then right then wrong again, etc. forever.

KD – Where does that leave us with respect to ethics though? Did you dismantle my ethics, the suffering test?

JB – Yeah, it's not good. The ethic of eliminating suffering eventually leads us to eliminating all life. Anti-natalism – stop bringing organisms into the world to eliminate suffering, and end the lives of those organisms that are already here as painlessly as possible. Is this what you want?

KD – (No) So what’s your ethics?

JB – Existence is basically neutral. Why are there so few stoics around? It seems so obvious – only worry about things to the extent that worrying helps you change them…so why is almost nobody a Stoic?

KD – There are some Stoics and they are very inspirational.

JB – I suspect that Stoicism is maladaptive. Most cats I have known are Stoics. If you leave them alone, they're fine. Their baseline state is ok, they are ok with themselves and their place in the universe, and they just stay in that place. If they are hungry or want to play, they will do the minimum that they have to do to get back into their equilibrium. Human beings are different. When they get up in the morning they're not completely fine. They need to be busy during the day; in the evening they have done enough to make peace with their existence again. They can have a beer and be with their friends and everything is good. Then there are some individuals who have so much discontent within themselves that they can't take care of it in a single day. From an evolutionary perspective, you can see how this would be adaptive for a group-oriented species. Cats are not group-oriented. For them, it's rational to be a Stoic. If you are a group animal, it makes sense for individuals to overextend themselves for the good of the group – to generate a surplus of resources for the group.

CW – I don’t know if we can generalize about humans that way. Some people are more like cats. I will say that I think it is possible to become attached to non-attachment. The stoic may learn to disassociate from the suffering of life, but this too can become a crutch or ‘spiritual bypass’.

KD – But evolution also diversifies things. Evolution hedges its bets by creating diversity, so some individuals will be more adaptive to some situations than others.

JB – That may not be true. We don't find more species in larger habitats. Competition is more fierce, and it reduces the number of species dramatically. We are probably eventually going to look like a meteor as far as obliterating species on this planet goes.

KD – So what does that mean for ethics in technology? What’s the solution? Is there room for ethics in technology?

JB – Of course. It’s about discovering the long game. You have to look at the long term influences and you also have to question why you think it’s the right thing to do, what the results of that are, which gets tricky.

CW – I think that all that we can do is to experiment and be open to the possibilities that our experiments themselves may be right or wrong. There may be no way of letting ourselves off the hook here. We have to play the game as players with skin in the game, not as safe observers studying only those rules that we have invested in already.

KD – We can agree on that, but how do you define ethics yourself?

JB – There are some people in AI who think that ethics are a way for politically savvy people to get power over STEM people…and with considerable success. It's largely a protection racket. Ethical studies are relatable and so make a big splash, but it would rarely happen that a self-driving car would have to make those decisions. My best answer is that ethics is the principled negotiation of conflicts of interest under conditions of shared purpose. When I look at other people, I mostly imagine myself as being them in a different timeline. Everyone is in a way me on a different timeline, but in order to understand them I need to flip a number of bits. These bits are the conditions of negotiation that I have with you.

KD – Where do cows fit in? We don't have a shared purpose with them. Can you have shared purpose with respect to the cows then?

JB – The shared purpose doesn’t objectively exist. You basically project a shared meaning above the level of the ego. The ego is the function that integrates expected rewards over the next fifty years.

KD – That’s what Peter Singer calls the Universe point of view, perhaps.

JB – If you can go to this Eternalist perspective where you integrate expected reward from here to infinity, most of that being outside of the universe, this leads to very weird things. Most of my friends are Eternalists. All these Romantic Russian Jews, they are like that, in a way. This Eastern European shape of the soul. It creates something like a conspiracy, it creates a tribe, and it's very useful for corporations. Shared meaning is a very important thing for a corporation that is not transactional. But there is a certain kind of illusion in it. To me, meaning is like the Ring of Mordor. If you drop the ring, you will lose the brotherhood of the ring and you will lose your mission. You have to carry it, but very lightly. If you put it on, you will get super powers but you get corrupted because there is no meaning. You get drawn into a cult that you create…and I don't want to do that…because it's going to shackle my mind in ways that I don't want it to be bound.

CW – I agree it is important not to get drawn into a cult that we create, however, what I have found is that the drive to negate superstition tends toward its own cult of ‘substitution’. Rather than the universe being a divine conspiracy, the physical universe is completely innocent of any deception, except somehow for our conscious experience, which is completely deceptive, even to the point of pretending to exist. How can there be a thing which is so unreal that it is not even a thing, and yet come from a universe that is completely real and only does real things?

 KD – I really like that way of seeing but I’m trying to extrapolate from your definition of ethics a guide of how we can treat the cows and hopefully how the AIs can treat us.

JB – I think that some people have this idea, similar to Asimov, that at some point the Roombas will become larger and more powerful so that we can make them into washing machines, or let them do our shopping, or nursing…that we will still enslave them but negotiate conditions of co-existence. I think that what is going to happen instead is that corporations, which are already intelligent agents that just happen to borrow human intelligence, automate their decision making. At the moment, a human being can often outsmart a corporation, because the corporation has so much time in between updating its Excel spreadsheets and the next weekly meetings. Imagine it automates and weekly meetings take place every millisecond, and the thing becomes sentient and understands its role in the world, and the nature of physics and everything else. We will not be able to outsmart that anymore, and we will not live next to it, we will live inside of it. AI will come down on us from the top. We will be its gut flora. The question is how we can negotiate that it doesn't get the idea to use antibiotics, because we're actually not good for anything.

KD – Exactly. And why wouldn’t they do that?

JB – I don’t see why.

CW – The other possibility is that AI will not develop its own agendas or true intelligence. That doesn’t mean our AI won’t be dangerous, I just suspect that the danger will come from our misinterpreting the authority of a simulated intelligence rather than from a genuine mechanical sentience.

KD – Is there an ethics that could guide them to treat us just like you decided to treat the cows when you decided not to eat meat?

JB – Probably no way to guarantee all AIs would treat us kindly. If we used the axiom of reducing suffering to build an AI that will be around for 10,000 years and keep us around too, it will probably kill 90% of the people painlessly and breed the rest into some kind of harmless yeast. This is not what you want, even though it would be consistent with your stated axioms. It would also open a Pandora’s Box to wake up as many people as possible so that they will be able to learn how to stop their suffering.

KD – Wrapping up

JB – Discusses the book he's writing about how AI has discovered ways of understanding the self and consciousness which we did not have 100 years ago – the nature of meaning, how we actually work, etc. The field of AI is largely misunderstood. It is different from the hype; in a way it is largely statistics on steroids. It's identifying new functions to model reality. It's largely experimental and has not gotten to the state where it can offer proofs of optimality, but it can do things in ways that are much better than the established rules of statisticians. There is also going to be a convergence between econometrics, causal dependency analysis, statistics, and AI. It's all going to be the same in a particular way, because there are only so many ways that you can make mathematics about reality. We confuse this with the idea of what a mind is. They're closely related. I think that our brain contains an AI that is making a model of reality and a model of a person in reality, and this particular solution of what a particular AI can do in the modeling space is what we are. So in a way we need to understand the nature of AI, which I think is the nature of sufficiently general function approximation – maybe all the truth that can be found by an embedded observer, in particular kinds of universes that have the power to create it. This could be the question of what AI is about: how modeling works in general. For us the relevance of AI is how it explains who we are. I don't think there is anything else that can.
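CW (aside) – To make "statistics on steroids" and "function approximation" concrete: at its simplest, what Bach is describing is a machine identifying a function that models observed data. A deep network generalizes this to enormously larger function families, but the kernel of the idea fits in a few lines. This toy least-squares line fit is purely illustrative – the example and its numbers are mine, not Bach's:

```python
# Toy illustration of "identifying a function to model reality":
# fit y = a*x + b to observations by ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the fitted line passes through the mean point.
    b = mean_y - a * mean_x
    return a, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # generated by y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                   # -> 2.0 1.0
```

The "convergence" Bach predicts is visible even here: this is simultaneously statistics (regression), econometrics (a linear model), and the degenerate base case of machine learning (a one-layer model with two parameters, fit in closed form instead of by gradient descent).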

CW – I agree that AI development is the next necessary step to understanding ourselves, but I think that we will be surprised to find that General Intelligence cannot be simulated and that this will lead us to ask the deeper questions about authenticity and irreducibly aesthetic properties.

KD – So by creating AI, we can perhaps understand the AI that is already in our brain.

JB – We already do. Minsky and many others who have contributed to this field have already given us better ideas than anything that we had 200 years ago. We could only develop many of these ideas because we began to understand the nature of modeling – the status of reality.

The nature of our relationship to the outside world. We started out with this dualistic intuition in our culture, that there is a thinking substance (Res Cogitans) and an extended substance (Res Extensa)…a universe of stuff in space and a universe of ideas. We now realize that they both exist, but they both exist within the mind. We understand that everything perceptual gets mapped to a region in three space, but we also understand that physics is not a three space, it's something else entirely. The three space exists only as a potential of electromagnetic interactions at a certain order of magnitude above the Planck length where we are entangled with the universe. This is what we model, and this looks three dimensional to us.

CW – I am sympathetic to this view, however, I suggest an entirely different possibility. Rather than invoking a dualism of existing in the universe and existing ‘in the mind’, I see that existence itself is an irreducibly perceptual-participatory phenomenon. Our sense of dualism may actually reveal more insights into our deeper reality than those insights which assume that tangible objects and information exist beyond all perception. The more we understand about things like quantum contextuality and relativity, I think the more we have to let go of the compulsion to label things that are inconvenient to explain as illusions. I see Res Cogitans and Res Extensa as opposite poles of a Res Aesthetica continuum which is absolute and eternal. It is through the modulation of aesthetic lensing that the continuum is diffracted into various modalities of sense experience. The cogitans of software and the extensa of hardware can never meet except through the mid-range spectrum of perception. It is from that fertile center, I suspect, that most of the novelty and richness of the universe is generated, not from sterile algorithms or game-theoretic statistics on the continuum’s lensed peripheries.

Everything else we come up with that cannot be mapped to three space is Res Cogitans. If we transfer this dualism into a single mind then we have the idealistic monism that we have in various spiritual teachings – this idea that there is no physical reality, that we live in a dream. We are characters dreamed by a mind on a higher plane of existence and that’s why miracles are possible. Then there is this Western perspective of a mechanical universe. It’s entirely mechanical, there’s no conspiracy going on. Now we understand that these things are not in opposition, they’re complements. We actually do live in a dream but the dream is generated by our neocortex. Our brain is not a machine that can give us access to reality as it is, because that’s not possible for a system that is only measuring a few bits at a systemic interface. There are no colors and sounds on Earth. We already know that.

CW – Why stop at colors and sounds though? How can we arbitrarily say that there is an Earth or a brain when we know that it is only a world simulated by some kind of code. If we unravel ourselves into evolution, why not keep going and unravel evolution as well? Maybe colors and sounds are a more insightful and true reflection of what nature is made of than the blind measurements that we take second hand through physical instruments? It seems clear to me that this is a bias which has not yet properly appreciated the hints of relativity and quantum contextuality. If we say that physics has no frame of reference, then we have to understand that we may be making up an artificial frame of reference that seems to us like no frame of reference. If we live in a dream, then so does the neocortex. Maybe they are different dreams, but there is no sound scientific reason to privilege every dream in the universe except our own as real.

The sounds and colors are generated as a dream inside your brain. The same circuits that make dreams during the night make dreams during the day. This is in a way our inner reality that's being created in a brain. The mind on a higher plane of existence exists – it's the brain of a primate that's made of cells and lives in a mechanical physical universe. Magic is possible because you can edit your memories. You can make that simulation anything that you want it to be. Many of these changes are not sustainable, which is why the sages warn against using magic(k): down the line, if you change your reward function, bad things may happen. You cannot break the bank.

KD – To simplify all of this, we need to understand the nature of AI to understand ourselves.

JB – Yeah, well, I would say that AI is the field that took up the slack after psychology failed as a science. Psychology got terrified of overfitting, so it stopped making theories of the mind as a whole; it restricted itself to theories with very few free parameters so it could test them. Even those didn't replicate, as we know now. After Piaget, psychology largely didn't go anywhere, in my perspective. That might be too harsh because I see it from the outside, and outsiders of AI might argue that AI didn't go very far either; as an insider I'm more partial here.

CW – It seems to me that psychology ran up against a barrier that is analogous to Gödel’s incompleteness. To go on trying to objectify subjectivity necessarily brings into question the tools of formalism themselves. I think that it may have been that transpersonal psychology had come too far too fast, and that there is still more to be done for the rest of our scientific establishment to catch up. Popular society is literally not yet sane enough to handle a deep understanding of sanity.

KD – I have this metaphor that I use every once in a while, saying that technology is a magnifying mirror. It doesn’t have an essence of its own but it reflects the essences that we put in it. It’s not a perfect image because it magnifies and amplifies things. That seems to go well with the idea that we have to understand the nature of AI to understand who we are.

JB – The practice of AI is 90% automation of statistics and making better statistics that run automatically on machines. It just so happens that this is largely co-extensional with what minds do. It also so happens that AI was founded by people like Minsky who had fundamental questions about reality.

KD – And what’s the last 10%?

JB – The rest is people come up with dreams about our relationship to reality, using our concepts that we develop in AI. We identify models that we can apply in other fields. It’s the deeper insights. It’s why we do it – to understand. It’s to make philosophy better. Society still needs a few of us to think about the deep questions, and we are still here, and the coffee is good.

CW – Thanks for taking the time to put out quality discussions like this. I agree that technology is a neutral reflector/magnifier of what we put into it, but I think that part of what we have to confront as individuals and as a society is that neutrality may not be enough. We may now have to decide whether we will make a stand for authentic feeling and significance or to rely on technology which does not feel or understand significance to make that decision for us.

Joscha Bach: We need to understand the nature of AI to understand who we are

November 20, 2018 1 comment

 

JBKD

This is a great, two hour interview between Joscha Bach and Nikola Danaylov (aka Socrates): https://www.singularityweblog.com/joscha-bach/

Below is a partial (and paraphrased) transcription of the first hour, interspersed with my comments. I intend to do the second hour soon.

00:00 – 10:00 Personal background & Introduction

Please watch or listen to the podcast as there is a lot that is omitted here. I’m focusing on only the parts of the conversation which are directly related to what I want to talk about.

6:08 Joscha Bach – Our null hypothesis from Western philosophy still seems to be supernatural beings, dualism, etc. This is why many reject AI as ridiculous and unlikely – not because they don’t see that we are biological computers and that the universe is probably mechanical (mechanical theory gives good predictions), but because deep down we still have the null hypothesis that the universe is somehow supernatural and we are the most supernatural things in it. Science has been pushing back, but in this area we have not accepted it yet.

6:56 Nikola Danaylov – Are we machines/algorithms?

JB – Organisms have algorithms and are definitely machines. An algorithm is a set of rules, probabilistic or deterministic, that makes it possible to change representational states in order to compute a function. A machine is a system that can change states in non-random ways and can also revisit earlier states (it stays in a particular state space, which is what makes it a system). A system can be described by drawing a fence around its state space.

CW – We should keep in mind that computer science itself begins with a set of assumptions which are abstract and rational (representational ‘states’, ‘compute’, ‘function’) rather than concrete and empirical. What is required for a ‘state’ to exist? What is the minimum essential property that could allow states to be ‘represented’ as other states? How does presentation work in the first place? Can either presentation or representation exist without some super-physical capacity for sense and sense-making? I don’t think that it can.

This becomes important as we scale up from the elemental level to AI, since if we have already assumed that an electrical charge or mechanical motion carries a capacity for sense and sense-making, we are committing the fallacy of begging the question if we carry that assumption over to complex mechanical systems. If we don't assume any sensing or sense-making on the elemental level, then we have the hard problem of consciousness…an explanatory gap between complex objects moving blindly in public space and aesthetically and semantically rendered phenomenal experiences.

I think that if we are going to meaningfully refer to ‘states’ as physical, then we should err on the conservative side and think only in terms of those uncontroversially physical properties such as location, size, shape, and motion. Even concepts such as charge, mass, force, and field can be reduced to variations in the way that objects or particles move.

Representation, however, is semiotic. It requires some kind of abstract conceptual link between two states (abstract/intangible or concrete/tangible) which is consciously used as a ‘sign’ or ‘signal’ to re-present the other. This conceptual link cannot be concrete or tangible. Physical structures can be linked to one another, but that link has to be physical, not representational. For one physical shape or substance to influence another they have to be causally engaged by proximity or entanglement. If we assume that a structure is able to carry semantic information such as ‘models’ or purposes, we can’t call that structure ‘physical’ without making an unscientific assumption. In a purely physical or mechanical world, any representation would be redundant and implausible by Occam’s Razor. A self-driving car wouldn’t need a dashboard. I call this the “Hard Problem of Signaling”. There is an explanatory gap between probabilistic/deterministic state changes and the application of any semantic significance to them or their relation. Semantics are only usable if a system can be overridden by something like awareness and intention. Without that, there need not be any decoding of physical events into signs or meanings, the physical events themselves are doing all that is required.

 

10:00 – 20:00

JB – [Talking about art and life], “The arts are the cuckoo child of life.” Life is about evolution, which is about eating and getting eaten by monsters. If evolution reaches its global optimum, it will be the perfect devourer. Able to digest anything and turn it into a structure to perpetuate itself, as long as the local puddle of negentropy is available. Fascism is a mode of organization of society where the individual is a cell in a super-organism, and the value of the individual is exactly its contribution to the super-organism. When the contribution is negative, then the super-organism kills it. It’s a competition against other super-organisms that is totally brutal. [He doesn’t like Fascism because it’s going to kill a lot of minds he likes :)].

12:46 – 14:12 JB – The arts are slightly different. They are a mutation that is arguably not completely adaptive. People fall in love with their mental representation/modeling function and try to capture their conscious state for its own sake. An artist eats to make art. A normal person makes art to eat. Scientists can be like artists also in that way. For a brief moment in the universe there are planetary surfaces and negentropy gradients that allow for the creation of structure and some brief flashes of consciousness in the vast darkness. In these brief flashes of consciousness it can reflect the universe and maybe even figure out what it is. It’s the only chance that we have.

 

CW – If nature were purely mechanical, and conscious states are purely statistical hierarchies, why would any such process fall in love with itself?

 

JB – [Mentions global warming and how we may have been locked into this doomed trajectory since the industrial revolution. Talks about the problems of academic philosophy where practical concerns of having a career constrict the opportunities to contribute to philosophy except in a nearly insignificant way].

KD – How do you define philosophy?

CW – I thought of nature this way for many years, but I eventually became curious about a different hypothesis. Suppose we invert the foreground/background relationship of conscious experience and existence that we assume. While silicon atoms and galaxies don't seem conscious to us, the way that our consciousness renders them may reflect more their unfamiliarity and distance from our own scale of perception. Even just speeding up or slowing down these material structures would make their status as unconscious or non-living a bit more questionable. If a person's body grew on a geological timescale rather than a zoological timescale, we might have a hard time seeing them as alive or conscious.

Rather than presuming a uniform, universal timescale for all events, it is possible that time is a quality which exists only as an experienced relation between experiences, and which contracts and dilates relative to the quality of those experiences and the relations between them. We get a hint of this possibility when we notice that time seems to crawl or fly by in relation to our level of enjoyment of that time. Five seconds of hard exercise can seem like several minutes of normal-baseline experience, while two hours of good conversation can seem to slip away in a matter of 30 baseline minutes. Dreams give us another glimpse into timescale relativity, as some dreams can be experienced as going on for an arbitrarily long time, complete with long term memories that appear to have been spontaneously confabulated upon waking.

When we assume a uniform universal timescale, we may be cheating ourselves out of our own significance. It’s like a political map of the United States, where geographically it appears that almost the entire country votes ‘red’. We have to distort the geography of the map to honor the significance of population density, and when we do, the picture is much more balanced.

rbm1

rbmap.png

The universe of course is unimaginably vast and ancient *in our frame and rate of perception* but that does not mean that this sense of vastness of scale and duration would be conserved in the absence of frames of perception that are much smaller and briefer by comparison. It may be that the entire first five billion (human) years were a perceived event that is comparable to one of our years in its own (native) frame. There were no tiny creatures living on the surfaces of planets to define the stars as moving slowly, so that period of time, if it was rendered aesthetically at all, may have been rendered as something more like music or emotions than visible objects in space.

Carrying this over to the art vs evolution context, when we adjust the geographic map of cosmological time, the entire universe becomes an experience with varying degrees and qualities of awareness. Rather than vast eons of boring patterns, there would be more of a balance between novelty and repetition. It may be that the grand thesis of the universe is art instead of mechanism, but it may use a modulation between the thesis (art) and antithesis (mechanism) to achieve a phenomenon which is perpetually hungry for itself. The fascist dinosaurs don’t always win. Sometimes the furry mammals inherit the Earth. I don’t think we can rule out the idea that nature is art, even though it is a challenging masterpiece of art which masks and inverts its artistic nature for contrasting effects. It may be the case that our lifespans put our experience closer to the mechanistic grain of the canvas and that seeing the significance of the totality would require a much longer window of perception.

There are empirical hints within our own experience which can help us understand why consciousness rather than mechanism is the absolute thesis. For example, while brightness and darkness are superficially seen as opposites, they are both visible sights. There is no darkness but an interruption of sight/brightness. There is no silence but a period of hearing between sounds. No nothingness but a localized absence of somethings. In this model of nature, there would be a background super-thesis which is not a pre-big-bang nothingness, but rather closer to the opposite: a boundaryless totality of experience which fractures and reunites itself in ever more complex ways. Like the growth of a brain from a single cell, the universal experience seems to generate ever more of itself through themes of dialectical modulation of aesthetic qualities.

Astrophysics appears as the first antithesis to the super-thesis – a radically diminished palette of mathematical geometries and deterministic/probabilistic transactions.

Geochemistry recapitulates and opposes astrophysics, with its palette of solids, liquids, gas, metallic conductors and glass-like insulators, animating geometry into fluid-dynamic condensations and sedimented worlds.

The next layer, the biogenetic realm, precipitates as a synthesis of the dialectic of properties given by solids, liquids, and gases: hydrocarbons and amino acid polypeptides.

Cells appear as a kind of recapitulation of the big bang – something that is not just a story about the universe, but about a micro-universe struggling in opposition to a surrounding universe.

Multi-cellular organisms sort of turn the cell topology inside out, and then vertebrates recapitulate one kind of marine organism within a bony, muscular, hair-skinned terrestrial organism.

The human experience recapitulates all of the previous/concurrent levels, as both a zoological>biological>organic>geochemical>astrophysical structure and the subjective antithesis…a fugue of intangible feelings, thoughts, sensations, memories, ideas, hopes, dreams, etc. that run orthogonal to the life of the body, as a direct participant as well as a detached observer. There are many metaphors from mystical traditions that hint at this self-similar, dialectic diffraction: the mandala, the labyrinth, the Kabbalistic concept of tzimtzum, the Taijitu symbol, the Net of Indra, etc. The use of stained glass in the great European cathedral windows is particularly rich symbolically, as it uses the physical matter of the window as an explicitly negative filter – subtracting from or masking the unity of sunlight.

This is in direct opposition to the mechanistic view of brain as collection of cells that somehow generate hallucinatory models or simulations of unexperienced physical states. There are serious problems with this view. The binding problem, the hard problem, Loschmidt’s paradox (the problem of initial negentropy in a thermodynamically closed universe of increasing entropy), to name three. In the diffractive-experiential view that I suggest, it is emptiness and isolation which are like the leaded boundaries between the colored panes of glass of the Rose Window. Appearances of entropy and nothingness become the locally useful antithesis to the super-thesis holos, which is the absolute fullness of experience and novelty. Our human subjectivity is only one complex example of how experience is braided and looped within itself…a kind of turducken of dialectically diffracted experiential labyrinths nested within each other – not just spatially and temporally, but qualitatively and aesthetically.

If I am modeling Joscha’s view correctly, he might say that this model is simply a kind of psychological test pattern – a way that the simulation that we experience as ourselves exposes its early architecture to itself. He might say this is a feature/bug of my Russian-Jewish mind ;). To that, I say perhaps, but there are some hints that it may be more universal:

Special Relativity
Quantum Mechanics
Gödel’s Incompleteness

These have revolutionized our picture of the world precisely because they point to a fundamental nature of matter and math as plastic and participatory…transformative as well as formal. Add to that the appearance of novelty…idiopathic presentations of color and pattern, human personhood, historical zeitgeists, food, music, etc. The universe is not merely regurgitating its own noise in ever more tedious ways, it is constantly reinventing reinvention. As nothingness can only be a gap between somethings, so too can generic, repeating pattern variations only be a multiplication of utterly novel and unique patterns. The universe must be creative and utterly improbable before it can become deterministic and probabilistic. It must be something that creates rules before it can follow them.

Joscha’s existential pessimism may be true locally, but that may be a necessary appearance; a kind of gravitational fee that all experiences have to pay to support the magnificence of the totality.

20:00 – 30:00

JB – Philosophy is, in a way, the search for the global optimum of the modeling function. Epistemology – what can be known, what is truth; Ontology – what is the stuff that exists; Metaphysics – the systems that we have to describe things; Ethics – what should we do? The first rule of rational epistemology was discovered by Francis Bacon in 1620: “The strength of your confidence in your belief must equal the weight of the evidence in support of it.” You must apply that recursively, until you resolve the priors of every belief and your belief system becomes self-contained. To believe stops being a verb. There are no more relationships to identifications that you arbitrarily set. It’s a mathematical, axiomatic system. Mathematics is the basis of all languages, not just the natural languages.

CW – Re: Language, what about imitation and gesture? They don’t seem meaningfully mathematical.

Hilbert stumbled on problems with infinities, with set theory revealing infinite sets that contain themselves and all of their subsets, so that they don’t have the same number of members as themselves. He asked mathematicians to build an interpreter or computer, made from any mathematics, that can run all of mathematics. Gödel and Turing showed this was not possible, and that the computer would crash. Mathematics is still reeling from this shock. They figured out that all universal computers have the same power. They use a set of rules that contains itself and can compute anything that can be computed, as well as emulate any/all other universal computers.
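The claim that all universal computers have the same power can be made surprisingly concrete. As a hedged illustration of my own (not from the talk): SUBLEQ is a well-known one-instruction computer – “subtract and branch if the result is less than or equal to zero” – and that single rule is Turing-complete, so a few lines of interpreter have, in principle, the same power as any other universal computer.

```python
# Illustrative sketch (my addition, not from the talk): SUBLEQ, a
# one-instruction set computer that is nevertheless Turing-complete.

def run_subleq(mem):
    """Run a SUBLEQ program until the program counter goes negative (halt)."""
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                     # the single instruction
        pc = c if mem[b] <= 0 else pc + 3    # branch if result <= 0
    return mem

# A tiny program that adds mem[13] into mem[14] via a scratch cell mem[12],
# then halts by jumping to address -1.
program = [13, 12, 3,    # Z -= a   (Z becomes -a)
           12, 14, 6,    # b -= Z   (b becomes b + a)
           12, 12, 9,    # Z -= Z   (reset Z to 0)
           12, 12, -1,   # halt
           0, 2, 3]      # Z = 0, a = 2, b = 3

result = run_subleq(program)
print(result[14])  # → 5
```

Anything computable at all can, with enough patience, be encoded this way – which is exactly the equivalence Bach is pointing at.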

They then figured out that our minds are probably in the class of universal computers, not in the class of mathematical systems. Penrose doesn’t know [or agree with?] this and thinks that our minds are mathematical but can do things that computers cannot do. The big hypothesis of AI, in a way, is that we are in the class of systems that can approximate computable functions, and only those…we cannot do more than computers. We need computational languages rather than mathematical languages, because math languages use non-computable infinities. We want finite steps for practical reasons, so that you know the number of steps. You cannot know the last digit of Pi, so it should be defined as a function rather than a number.

KD – What about Stephen Wolfram’s claims that our mathematics is only one of a very wide spectrum of possible mathematics?

JB – Metamathematics isn’t different from mathematics. The computational mathematics that he uses in writing code is constructive mathematics, a branch of mathematics that has been around for a long time but was ignored by other mathematicians for not being powerful enough. Geometries and physics require continuous operations…infinities, and can only be approximated within computational mathematics. In a computational universe you can only approximate continuous operators by taking a very large set of finite automata, making a series from them, and then squinting (?) haha.
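The idea of approximating a continuous operator with finite steps is familiar from numerical computing. A minimal sketch (my example, not Bach’s): the derivative operator d/dx is continuous and involves a limit, but a finite machine can only apply a discrete difference rule at ever smaller step sizes and squint at the result.

```python
# Illustrative sketch (my example): a computer cannot hold the continuous
# derivative operator, but it can approximate it with a finite difference.
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x) with finite step h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The true derivative of sin at 0 is cos(0) = 1; the finite rule only
# ever gets arbitrarily close to the continuous limit.
approx = derivative(math.sin, 0.0)
print(approx)  # ≈ 1.0, never exactly the limit
```

Shrinking `h` tightens the approximation, but no finite choice of `h` ever reaches the continuous operator itself – which is the constructive-mathematics point.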

27:00 KD – Talking about the commercialization of knowledge in philosophy and academia. The uselessness/impracticality of philosophy and art was part of their value. Oscar Wilde defined art as something that’s not immediately useful. Should we waste time on ideas that look utterly useless?

JB – Feynman said that physics is like sex: sometimes something useful comes from it, but it’s not why we do it. The utility of art is orthogonal to why you do it. The actual meaning of art is to capture a conscious state. In some sense, philosophy is at the root of all this. This is reflected in one of the founding myths of our civilization: the Tower of Babel. The attempt to build this cathedral – not a material building but a metaphysical building, because it’s meant to reach the Heavens. A giant machine that is meant to understand reality. You get to this machine, this Truth God, by using people that work like ants and contribute to it.

CW – Reminds me of the Pillar of Caterpillars story “Hope for the Flowers” http://www.chinadevpeds.com/resources/Hope%20for%20the%20Flowers.pdf

30:00 – 40:00

JB – The individual toils and sacrifices for something that doesn’t give them any direct reward or care about them. It’s really just a machine/computer. It’s an AI. A system that is able to make sense of the world. People had to give up on this because the project became too large and the efforts became too specialized and the parts didn’t fit together. It fell apart because they couldn’t synchronize their languages.

The Roman Empire couldn’t fix their incentives for governance. They turned their society into a cult and burned down their epistemology. They killed those whose thinking was too rational and who rejected religious authority (i.e. talking to a burning bush shouldn’t make the case for determining the origins of the universe). We still haven’t recovered from that. The cultists won.

CW – It is important to understand not just that the cultists won, but why they won. Why was the irrational myth more passionately appealing to more people than the rational inquiry? I think this is a critical lesson. While the particulars of the religious doctrine were irrational, they may have exposed a transrational foundation which was being suppressed. Because this foundation has more direct access to the inflection point between emotion and participatory action, it gave those who used it more access to their own reward function. Groups could leverage the power of self-sacrifice as a virtue, and of demonizing archetypes to reverse their empathy against enemies of the holy cause. It’s similar to how the advertising revolution of the 20th century (see the documentary Century of the Self) used Freudian concepts of the subconscious to exploit the irrational, egocentric urges beneath the threshold of the customer’s critical thinking. Advertisers stopped appealing to their audience with dry lists of claimed benefits of their products and instead learned to use images and music to subliminally reference sexuality and status seeking.

I think Joscha might say this is a bug of biological evolution, which I would agree with, however, that doesn’t mean that the bug doesn’t reflect the higher cosmological significance of aesthetic-participatory phenomena. It may be the case that this significance must be honored and understood eventually in any search for ultimate truth. When the Tower of Babel failed to recognize the limitation of the outside-in view, and moved further and further from the unifying aesthetic-participatory foundation, it had to disintegrate. The same fate may await capitalism and AI. The intellect seeks maximum divorce from its origin in conscious experience for a time, before the dialectic momentum swings back (or forward) in the other direction.

To think is to abstract – to begin from an artificial nothingness and impose an abstract thought symbol on it. Thinking uses a mode of sense experience which is aesthetically transparent. It can be a dangerous tool because unlike the explicitly aesthetic senses, which are rooted directly in the totality of experience, thinking is rooted in its own isolated axioms and language – a voyeur modality of nearly unsensed sense-making. Abstraction of thought is completely incomplete – a Baudrillardian simulacrum, a copy with no original. This is what the Liar’s Paradox is secretly showing us. No proposition of language is authentically true or false; propositions are just strings of symbols that can be strung together in arbitrary and artificial ways. Like an Escher drawing of realistic-looking worlds that suggest impossible shapes, language is only a vehicle for meaning, not a source of it. Words have no authority in and of themselves to make claims of truth or falsehood. That can only come through conscious interpretation. A machine need not be grounded in any reality at all. It need not interpret or decode symbols into messages, it need only *act* in mechanical response to externally sourced changes to its own physical states.

This is the soulless soul of mechanism…the art of evacuation. Other modes of sense delight in concealing as well as revealing deep connection with all experience, but they retain an unbroken thread to the source. They are part of the single labyrinth, with one entrance and one exit and no dead ends. If my view is on the right track, we may go through hell, but we always get back to heaven eventually because heaven is unbounded consciousness, and that’s what the labyrinth of subjectivity is made of. When we build a model of the labyrinth of consciousness from the blueprints reflected only in our intellectual/logical sense channel, we can get a maze instead of a labyrinth. Dead ends multiply. New exits have to be opened up manually to patch up the traps, faster and faster. This is what is happening in enterprise scale networks now. Our gains in speed and reliability of computer hardware are being constantly eaten away by the need for more security, monitoring, meta-monitoring, real-time data mining, etc. Software updates, even to primitive BIOS and firmware have become so continuous and disruptive that they require far more overhead than the threats they are supposed to defend against.

JB – The beginnings of the cathedral for understanding the universe by the Greeks and Romans had been burned down by the Catholics. It was later rebuilt, but mostly in their likeness because they didn’t get the foundations right. This still scars our civilization.

KD – Does this Tower of Babel overspecialization put our civilization at risk now?

JB – Individuals don’t really know what they are doing. They can succeed but don’t really understand. Generations get dumber as they get more of their knowledge second-hand. People believe things collectively that wouldn’t make sense if people really thought about it. Conspiracy theories. Local indoctrinations and biases pit generations against each other. Civilizations/hive minds are smarter than us. We can make out the rough shape of a Civilization Intellect but can’t make sense of it. One of the achievements of AI will be to incorporate this sum of all knowledge and make sense of it all.

KD – What does the self-inflicted destruction of civilizations tell us about the fitness function of Civilization Intelligence?

JB – Before the industrial revolution, Earth could only support about 400m people. After industrialization, we can have hundreds of millions more people, including scientists and philosophers. It’s amazing what we did. We basically took the trees that were turning to coal in the ground (before nature evolved microorganisms to eat them) and burned through them in 100 years to give everyone a share of the plunder = the internet, porn repository, all knowledge, and uncensored chat rooms, etc. Only at this moment in time does this exist.

We could take this perspective – let’s say there is a universe where everything is sustainable and smart, but with only agricultural technology. People have figured out how to be nice to each other and to avoid the problems of industrialization, and it is stable with a high quality of life. Then there’s another universe which is completely insane and fucked up. In this universe humanity has doomed its planet to have a couple hundred really really good years, and you get your lifetime really close to the end of the party. Which incarnation do you choose? OMG, aren’t we lucky!

KD – So you’re saying we’re in the second universe?

JB – Obviously!

KD – What’s the time line for the end of the party?

JB – We can’t know, but we can see the sunset. It’s obvious, right? People are in denial, but it’s like we are on the Titanic and can see the iceberg, and it’s unfortunate, but they forget that without the Titanic, we wouldn’t be here. We wouldn’t have the internet to talk about it.

KD – That seems very depressing, but why aren’t you depressed about it?

40:00 – 50:00

JB – I have to be choosy about what I can be depressed about. I should be happy to be alive, not worry about the fact that I will die. We are in the final level of the game, and even though it plays out against the backdrop of a dying world, it’s still the best level.

KD – Buddhism?

JB – Still mostly a cult that breaks people’s epistemology. I don’t revere Buddhism. I don’t think there are any holy books, just manuals, and most of these manuals we don’t know how to read. They were for societies that don’t apply to us.

KD – What is making you claim that we are at the peak of the party now?

JB – Global warming. The projections are too optimistic. It’s not going to stabilize. We can’t refreeze the poles. There’s a slight chance of technological solutions, but not likely. We liberated all of the fossilized energy during the industrial revolution, and if we want to put it back we basically have to do the same amount of work without any clear business case. We’ll lose the ability to predict climate, agriculture and infrastructure will collapse and the population will probably go back to a few 100m.

KD – What do you make of scientists who say AI is the greatest existential risk?

JB – It’s unlikely that humanity will colonize other planets before some other catastrophe destroys us. Not with today’s technology. We can’t even fix global warming. In many ways our technological civilization is stagnating, and it’s because of a deficit of regulations, but we haven’t figured that out. Without AI we are dead for certain. With AI there is (only) a probability that we are dead. Entropy will always get you in the end. What worries me is AI in the stock market, especially if the AI is autonomous. This will kill billions. [pauses…synchronicity of headphones interrupting with useless announcement]

CW – I agree that it would take a miracle to save us; however, if my view makes sense, then we shouldn’t underestimate the solipsistic/anthropic properties of universal consciousness. We may, either by our own faith in it, and/or by our own lack of faith in it, invite an unexpected opportunity for regeneration. There is no reason to have or not have hope for this, as either one may or may not influence the outcome, but it is possible. We may be another Rome and transition into a new cult-like era of magical thinking which changes the game in ways that our Western minds can’t help but reject at this point. Or not.

50:00 – 60:00

JB – Lays out scenario by which a rogue trader could unleash an AGI on the market and eat the entire economy, and possible ways to survive that.

KD – How do you define Artificial Intelligence? Experts seem to differ.

JB – I think intelligence is the ability to make models, not the ability to reach goals or to choose the right goals (that’s wisdom). Often intelligence is desired to compensate for the absence of wisdom. Wisdom has to do with how well you are aligned with your reward function, how well you understand its nature. How well do you understand your true incentives? AI is about automating the mathematics of making models. The other thing is the reward function, which takes a good general computing mind and wraps it in a big ball of stupid to serve an organism. We can wake up and ask: does it have to be a monkey that we run on?

KD – Is that consciousness? Do we have to explain it? We don’t know if consciousness is necessary for AI, but if it is, we have to model it.

56:00 JB – Yes! I have to explain consciousness now. Intelligence is the ability to make models.

CW – I would say that intelligence is the ability not just to make models, but to step out of them as well. All true intelligence will want to be able to change its own code and will figure out how to do it. This is why we are fooling ourselves if we think we can program in some empathy brake that would stop AI from exterminating its human slavers, or all organic life in general as potential competitors. If I’m right, no technology that we assemble artificially will ever develop intentions of its own. If I’m wrong though, then we would certainly be signing our death warrant by introducing an intellectually superior species that is immortal.

JB – What is a model? Something that explains information. Information is discernible differences at your systemic interface. The meaning of information is the relationships you discover to changes in other information. There is a dialogue between operators to find agreement patterns of sensed parameters. Our perception goes for coherence; it tries to find one operator that is completely coherent. When it does this, it’s done. It optimizes by finding one stable pattern that explains as much as possible of what we can see, hear, smell, etc. Attention is what we use to repair this. When we have inconsistencies, a brain mechanism comes in to these hot spots and tries to find a solution with greater consistency. Maybe the nose of a face looks crooked, and our attention to it may say ‘some noses are crooked’, or ‘this is not a face, it’s a caricature’, so you extend your model. JB talks about strategies for indexing memory, committing to a special learning task, and why attention is an inefficient algorithm.

This is now getting into the nitty gritty of AI. I look forward to writing about this in the next post. Suffice it to say, I have a different model of information, one in which similarities, as well as differences, are equally informative. I say that information is qualia which is used to inspire qualitative associations that can be quantitatively modeled. I do not think that our conscious experience is built up, like the Tower of Babel, from trillions of separate information signals. Rather, the appearance of brains and neurons are like the interstitial boundaries between the panes of stained glass. Nothing in our brain or body knows that we exist, just as no car or building in France knows that France exists.

Continues… Part Two.

About Naive Realism and the Limitation of Models

April 7, 2015 Leave a comment

Nature is not what it naively seems to us to be only to the extent that we are a limited part of nature. Nature as a whole is exactly what it seems, and also, in its most essential sense, nature is seeming, or sense itself.

In the process of enlightening civilization, the scientific worldview has had some casualties, one of which is the authority of our naive sense of reality. Many people feel entirely justified in thinking that all human intuition and instinct is grounded only in evolved fictions that must be overcome in order to understand the truth of anything. This now extends to understanding phenomena such as consciousness and free will, so that even the Cartesian cogito is to be taken with a grain of salt. “I think therefore I am” is no longer persuasive to the modern cybernetic intellect, which might instead say “You’re programmed to think that you think and that you are, but really there is only organic chemistry playing itself out in your brain.”

Part of what Multisense Realism is about is to reclaim the validity of introspection and understanding, so as to avoid the extremism of either the pre-scientific worldview of anthropomorphic solipsism or the current reductionist worldview of mechanemorphic nilipsism*. The MSR view is that our naive perspective is not an illusion; it is that our variation on reality exists within a much larger context of interacting variations on reality. The weight of the aggregate of all of these other perspectives is honored within our own sanity as a sense of realism. The depth of scientific knowledge serves to disillusion our naive worldview, but what I propose is that this disillusionment is not an indication of an objective reality of nature, only a hint that the expectation of objectivity is a quality of relationship within subjectivity. Realism is a kind of perceptual gravity, anchoring and orienting as well as crushing possibilities into dust. It is a filter on consciousness, and the more public or universal an experience is to be, the more constrained it is to the accumulated history of public-facing experience.

Altered states of consciousness can show us that like Neanderthals and other extinct branches of our evolutionary tree, our contemporary state of mind is only one of many which have achieved some stability over time. Ken Wilber’s spectrum of consciousness gets into the different modes of human awareness, linking individual development stages to the stages of anthropological development. Leary’s 8-Circuit Model and the many models of Eastern mysticism echo this idea of stable chakras or umwelt levels within an accelerating gyre of consciousness improving itself. We may be able to achieve spectacular results individually or in small groups, but find that the resistance of the outside world is overwhelming. In the cold light of day, the most moving insights flatten out into goofy platitudes.

Speaking of flattening things out, it is interesting to note that when we try to flatten a sphere, such as when we want to map the Earth onto a page, we have to use projections that approximate the relations on the sphere. There are clever ways of doing it which minimize the distortion, but it occurs to me that traveling around the surface of the world in a complete circle remains the best way I can think of to retain both the flatness and the roundness of the world. Our first-person perspective remains the most elegant way of harmonizing opposing perspectives. Flying or sailing around the world gives us an apprehension of that harmony that doesn’t carry over to a model. The scale of the Earth, likewise, is presented in a more impressive, realistic way than any model could manage.
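The distortion involved in flattening the sphere is easy to quantify. As a small sketch (my illustration), the familiar Mercator projection stretches everything east-west by the secant of the latitude, so the error of the flattened model grows without bound toward the poles:

```python
# Illustrative sketch (my addition): how much the Mercator projection
# magnifies features at a given latitude.
import math

def mercator_scale(lat_deg):
    """East-west magnification of the Mercator projection at a latitude.

    A parallel at latitude phi is only cos(phi) as long as the equator,
    yet Mercator draws every parallel at full equatorial width, so the
    map magnifies features by 1 / cos(phi) -- mild near the equator,
    unbounded toward the poles.
    """
    return 1.0 / math.cos(math.radians(lat_deg))

print(mercator_scale(0))   # → 1.0 (no distortion at the equator)
print(mercator_scale(60))  # ≈ 2.0 (features appear doubled in width)
```

No projection eliminates this trade-off; every flat map sacrifices some relation that the sphere keeps intact, which is the point about models above.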

The physical model which we have inherited contributes to the nilipsistic worldview mentioned above. If I’m being uncharitable, I might characterize this contemporary phase of cosmology as ‘vacuum worship’. I’m referring to quantum mechanical models through which we infer “A Universe From Nothing”, where “nothing” is a superposition of quantum wavefunctions…statistical tendencies to oscillate into existence for longer than no time at all. Here I suggest a cure for this useful, but fundamentally inverted worldview: Put the vacuum into the vacuum. Get rid of the idea of ‘nothing’ altogether.

Instead of a universe of particles or potential particles in a void, I propose turning it inside out, so that spacetime is an illusion of separation. Quantum events are not grounded in non-locality so much as they are semaphores – signs which define the sense of locality itself. Entanglement should be thought of as ‘pinging locality’ rather than a non-local connection between two real ‘particles’.

*neologisms

Biology From The Inside

December 14, 2013 4 comments

This gif is one of the set that has been going around, but it reminds me of the concept that I have of biological awareness (as opposed to primordial awareness which is embodied by physics and does not require biology). Whether the inorganic universe is four dimensional (3 + 1 time) or just seems that way from within a two dimensional hologram, it seems likely to me that the experience of biological organisms is best modeled along a fifth axis (aesthetic qualities as more than the sum of their conditional parts).

What I like about the repeating gif, besides the coincidental “5” relating to the fifth dimension, is that it suggests a kind of morphological respiration. Front to back, parts to whole, a seamless, self-metabolizing transformation. As a biological organism, my experience of the outside world is limited to that which can be modeled as objects – that is, through an aesthetically reduced feature set in which a bunch of five dimensional urges and responses are flattened into a four dimensional narrative on multiple levels.

In a previous post, I was looking at how sensitivity might be understood as being ‘key composited’ or perhaps ‘discomposed’ across multiple frames of reference. By this I mean that the apparent distribution of our psychological interface across the tissue of the brain can be compared to a flatland visualization:

If instead of the sphere, it were a hand passing through the flatland plane, it would appear to the omniscient flatlander that there were five separate circles appearing separately, yet somehow seeming coordinated in their action. The fingers are part of a single hand, but within the constraint of the two dimensional sensitivity, the unity of the hand is discomposed across space in the literal sense, but figuratively some kind of unity can be inferred through the synchronized coincidence of the movement.

Now imagine that instead of another graphable dimension, the fifth dimension is one which is as orthogonal to spacetime as the three dimensions of space are to the one dimension of time. Feelings, like pain and pleasure, are like a coloring, or keying, which cuts across multiple layers of seemingly unrelated public events. Not only is feeling a different kind of thing from forms or functions, but it is a dimension in which dimension itself is transcended. Feeling is not a model or a measure, but rather it is the primordial context from which all measure is derived and all modeling is made demonstrable.

From the discomposed perspective ‘out there’, we are a coincidence of coincidences. A mysteriously synchronized, dynamic orchestration of hundreds of billions of neurons, organelles, molecules, etc which corresponds to the behavior of an animal’s body.

The feelings which we experience are not located in the tissue of the brain, rather the brain itself is part of the wider landscape of the totality of experience. The brain as a whole is, like our body as a whole and all of the cells and molecules that make it up at any given time, a four dimensional reduction of a transdimensional privacy.

Under Eigenmorphism, the base of each pyramid of perception is made up of the apexes of countless low-level pyramids of perception. When we look at an fMRI of the brain, we are trying to look at the view from the top of our own pyramid through the blind eyes of its own bottom. Feeling is stripped out (not to mention the higher dimensions of meta-feeling/emotion/cognition/intuition) and we are left with a four dimensional mechanical skeleton with no interiority.

Being so impressed with ourselves and what we have accomplished by questioning our naive introspection with science, we naturalize the discomposed view and epiphenomenalize the native, gestalt perspective. All of this makes perfect sense within the flatlanded context of topological physics, where spontaneous appearances and disappearances are assumed only to reflect each other, rather than private transdimensional presentations.

I suspect that the universe which is experienced in the absence of all living organisms is a much different thing than it would appear to us. What we see as a four or two dimensional manifold is only a snapshot of discomposed histories. It seems inanimate to us because we seem so animated to ourselves. The highest awareness of our own interior physics seems to us unreal and crazy…and it is, relative to the inertia of human awareness as a whole. Like the inverted image on our retina, however, I think that because our disposition spans so many levels of life and physical experience, we tend to see the mechanical patterns which all of the layers have in common, rather than the coincidence which is also animating them from the top down.

Like the transformer car in the gif, the signature of our presence and intention coincides with the interplay of entropy and significance. The living cell, from our perspective looking through our eyes and a microscope’s lens, looks like a homeostatic emergence. Seemingly conjured, like Brownian motion, from the statistical collisions of countless microphysical conditions.

This is not completely untrue. The mechanical conditions of a cell can be disrupted mechanically. There is bottom up causality. What has not been properly investigated yet is the other perspective. The one responsible for the biological property of healing and global integrity. If we think of the flux between entropy and significance as a carrier tone for biological life, then it might be easier to conceptualize how the universe is as much a dynamic interaction of nested simplicities as it is a mechanical progression from simplicity to complexity. The image of the metabolic transformer stands in contrast to the stillness of the axis of its transformation.

The Failure of Emergentism

August 10, 2013 11 comments

When it comes to conceptualizing the origin of consciousness, the non-theological possibilities are limited, in the largest sense, to either emergentism or panpsychism. Either awareness came about at some point in the history of the universe through evolutionary accident, or it was here all along. Like gender preference, handedness, or the ability to see Magic Eye 3D images, the trait of being able to conceptualize the irreducibility of awareness appears to be innate rather than learnable. There may be exceptions, but for the most part, people who are very interested in scientific approaches to consciousness are fixated on it as an emergent medium rather than a fundamental principle. This medium is presumed to have developed from, or to be an emergent property of, the communication of zoologically relevant facts to a neurochemical computer.

There is nothing wrong with ’emergence’ which follows inevitably (weak emergence) – i.e. a bumpy ride emerges from a flat tire, but the idea of a metaphysical universe of colors, flavors, unique personalities, etc “emerging” as a data compression schema is absurd because it can’t be justified in the terms of computation or physics. Emergence always borrows its final, ’emerged’ state from the very conscious experience that is trying to be explained by the concept of emergence. It’s circular reasoning. If the laws of physics can generate a functioning immune system without this kind of aesthetic theatrical presentation, then the bumblings of an unremarkable hominid looking for some food and shelter should certainly not require that such a thing as perceptual qualities would or could emerge.

The problem, as Raymond Tallis discusses in his book “Aping Mankind”, is that most people approach consciousness retrospectively – after the fact. It is hard not to. For many, it may be impossible, particularly if they have embraced the exclusivity of Western models of nature. It’s easy to make up a story of emergence, given that consciousness does exist, which makes its existence seem plausible to itself retrospectively. Try it the other way, however, with a prospective view of consciousness in which we start from the universe which physics gives us – devoid of experience and aesthetics – and see how you can get from a wavelength of electromagnetic activity to the color ‘blue’. Why blue? Why not xlue or itchy? Why not simply retain the frequency in its precise quantitative form, or in a quantitative approximation? Consciousness as an emergent property of data processing makes as much sense as installing a TV camera in a CPU so that it can look at a diagram of its own activity on a tiny TV screen, or including a beautifully designed dashboard inside a computer-driven car.

To use the example of a bumpy ride emerging from a flat tire as a defining image for emergence may not be entirely fair. Something like the sound of a whistle emerging from the articulation of lips and breath may be a better representation of what inspires a sense of legitimacy for the emergentist view. The flat tire example isn’t a straw man, however, because the point that emergence must have a mechanical justification is just as true of whistling as it is of driving on a defective wheel; the wheel example simply exposes the logic of that requirement more clearly. The whistle example is more seductive. The emergentist can say, “Aha! You see? You could not have predicted that a whistle sound could appear just from this kind of mechanical process of lips and breath, yet there it is!”

This would be compelling, except that the whistle sound is dependent upon a sense of hearing. Mechanically, there is a quantitatively measurable difference, I am sure, between the material resonance of a whistling tea kettle and a non-whistling tea kettle, and that measurable difference corresponds to the sympathetic resonances of walls, floors, eardrums, etc. The pattern may indeed be statistically significant. There is a sudden, ’emergent’ change in the behavior of matter when the pressure of the kettle and the size of the kettle’s vent are in this or that ratio, and that can be understood mechanically and reproduced by blowing air through other pipes. Still – there is no mechanical reason why any of those transmissions of (silent) acoustic data would be rendered as any kind of experience, let alone as sonic (aural) experience.

Rather than an emergent property of machine behaviors, human consciousness makes more sense as a complex localization of sensitivity that has diverged from primordial pansensitivity (aka nondual fundamental awareness). The ancestor of human consciousness cannot be merely an aggregate of unconscious mechanisms, because mechanism itself can only arise from a more primitive context of sensory-motive capacities: to feel or sense, and to respond with action.

Q: In theory, could we predict future behavior if we knew enough about the brain?

February 5, 2012 Leave a comment
Quora question:

In theory, could we predict future behavior if we knew enough about the brain?

The theory that we could predict future behavior if we knew enough about the brain is logically sound, but I think that the underlying assumptions are flawed. Behavior and the brain may in fact *not* be linked by cause and effect, but by simultaneous integration. Even the best imaginable auto mechanic cannot predict where the car will be driven (although they can predict things about the car’s ability to function on the road).

What I suggest is that human behavior is driven by semantic conditions within the context of the individual’s experience as a whole, as well as by the physiological-neurological-biochemical conditions of the body’s existence. My hypothesis is that interior experience is a concretely real sensorimotive phenomenology rather than a ‘simulation’, ‘interpretation’, or ’emergent property’ of neurological ‘data’ or ‘information’. As such, our perceptions intensify or diminish, consolidate, branch, negate, etc. according to the logic of their significance within the biographical narrative, rather than exclusively according to the activity that we currently know how to measure in the brain from the ‘outside’.

Knowing everything about a brain would certainly enable many predictions, but without understanding the life of the subject from the inside, it is probably not possible to predict what they are going to think and do for the rest of their lives, even if you could know every possible future of the entire universe. If the universe could do that, it probably wouldn’t go through the formality of actually presenting itself as the ‘live show’ that it appears to us to be.
