Archive

Archive for the ‘information’ Category

Draft on Information, Entropy, and Negentropy

July 26, 2022 Leave a comment

It is currently popular to equate entropy with information. I have written previously about why I do not think this makes sense, citing thought experiments such as running a video of ice melting in reverse. The fact that the number of bytes in the video does not change demonstrates several concepts:

  1. Thermodynamic entropy is not equal to information entropy.
  2. The video image of ice melting is not the chemical process of ice melting.
  3. Sense modalities matter.
  4. Qualia cannot be presumed to arise organically from physical facts.
  5. Simulation is a superficial aspect of overlapping sense modalities rather than a deep fact of nature.
  6. Information includes the sign, the unsign, the noise, and the unnoise.
  7. The sign contains maximum information and the least entropy. The sign is detected, understood, and known to be detected and understood.
  8. The unsign is the property of a given quale to be disambiguated negatively. The letter A is definitely not a consonant. We could say there is a set of things that A definitely is not. I am calling that the unsign of A.
  9. The noise is that which is assumed to be outside of the scope of signs but not outside of scope of detection.
  10. The unnoise is that which is assumed to be outside of the scope of detection, or which can neither be assumed nor ruled out.
  11. Abstraction is built on sense deployed on the ability to sense that something else has been sensed.
  12. Entropy and negentropy are qualities related to detecting detection events in the most minimal and generic terms rather than to the qualities that are being sensed themselves.
  13. The minimal and generic terms of data, such as bits, are the terms of a TEST performed concretely using physical substances and their capacity to detect and change (sensory-motive) each other’s (detectable-by-some-sensory-motive-test) state.
  14. Information refers to the results of a set of sensory-motive/experiential tests.
  15. Entropy and negentropy refer to information and qualities of sensory-motive tests.
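
Point 1 in the list above can be demonstrated in a few lines of code: Shannon entropy is computed purely from symbol frequencies, so reversing the bytes of the melting-ice video leaves it unchanged, even though the depicted thermodynamic process runs backwards. This is only an illustrative sketch; the byte values below are an arbitrary stand-in for real video data.

```python
from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, from byte-value frequencies.

    Counts are sorted so the floating-point summation order is
    deterministic regardless of the order bytes were first seen.
    """
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in sorted(counts.values()))

# Stand-in for the frames of an ice-melting video (any bytes will do).
frames = bytes([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5])
reversed_frames = frames[::-1]

# Playing the "video" backwards reverses the thermodynamic entropy it
# depicts, but the information entropy of the data is identical.
assert shannon_entropy(frames) == shannon_entropy(reversed_frames)
```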

We mistakenly call these tests and test results read-writes. Mistaken because there is NO decoding of the aesthetic qualities represented through what is being tested/“read”; there is only a copying of parts of sequences of (what we perceive to be) test results from one physical “storage” site to another. From the perspective of hardware (which has no perspective in reality), there can be no “code”. The hardware event is controllable precisely because it has no meaning or motive of its own that would significantly impact those properties we are testing for.

Because we could dream of different facts relating to video images and chemical processes, neither can be identical to visible and tactile-tangible experiences.

Critique of A Good Idea

July 18, 2022 Leave a comment

Here are my (unfortunately critical but well-intentioned) comments on “Electromagnetism’s Bridge Across the Explanatory Gap: How a Neuroscience/Physics Collaboration Delivers Explanation Into All Theories of Consciousness“, in response to some tweets.

I think that the paper does come up with good plans of action for experimentation, and I take no issue with those. I agree that we should make artificial neurons. I agree that we should do experiments that will tease out the most primitive signs of electromagnetism emerging from more fundamental physics, and I agree we should think of them as hints about how consciousness provides typical human modes of awareness. My disagreements are with the assumptions made in getting there.

I fully acknowledge that my disagreements are made from my own conjectures and I expect most audiences to consider those conjectures ‘crackpot’ ideas prior to even attempting to understand them fully. That doesn’t bother me in the slightest. My only hope is that there might be some trace left of my ideas on the internet in future decades that could help future theorists improve on or disprove my many hypotheses.

From the start, the issue of consciousness is framed in relation to both First Person qualities of experience and to the sense of there being an “inside” of what is being observed as brain and body behaviors.

“Observational correlates are demonstrated to be intrinsically very unlikely to explain or lead to a fundamental principle underlying the strongly emergent 1st-person-perspective (1PP) invisibly stowed away inside them. “

I submit that this is already a rhetorically loaded framing that does not consider the possibility that the sense of privacy and interiority we commonly (but not always) experience is not any more fundamental than the sense of publicity and exteriority, even though those distinctions are widely reported to be transcended in certain states of consciousness.

Nobody has ever seen a first-person (1PP) experience ’emerge’ from a brain in any way. What we have observed is a correlation appearance between experiences with intangible or trans-tangible qualities and experiences of tangible appearances of changes in the brain.

I think that I should break that awkward sentence down further.

What we have observed (scientists, doctors, patients with brain injuries, etc)

is a correlation appearance (meaning we see a brain doing something and we hear reports of something else, but they appear to happen at the same time). There is no evidence of causation, no mechanism by which a brain activity transforms into another quality like color, flavor, or privacy. There is only a (veridical) appearance of temporal synchronization.

between experiences with intangible or trans-tangible qualities (I’m trying here to refer to the qualitative phenomena of nature that we tend to associate with and assume arise only within “1PP” privacy, but to explicitly avoid jumping to that logically unnecessary conclusion. I think the relevant thing about feelings, thoughts, flavors, etc is not that they are private but that they are NOT tangible. They are not touchable presences with geometric shape. They can be intangible concepts or phenomena that I call percepts (sensations, feelings, colors, etc) that are not completely intangible or conceptual but cannot be reduced non-destructively to geometric coordinates.)

and experiences of tangible appearances of changes in the brain. (I’m trying to emphasize here that regardless of how real and objective the brain appears, its appearance does depend on the modalities of sight and touch used to detect it. Those appearances cannot be said to be more fundamental than any other appearances that tend to appear to be ‘1PP’).

In consideration of that, I think that it is just as likely that the seemingly third person and seemingly first person qualities of experience ’emerge’, or perhaps better ‘diverge’ from a larger holarchy of conscious experience.

“The brain’s specialized complexity in EM field expression distinguishes it from other organs (such as the liver and the heart) that are also EM field entities from the atomic level up. The consequence is that there is only one natural, fundamental physics correlate of P-Consciousness: EM fields as “electromagnetic correlates of consciousness””

This seems to contradict itself. It is saying that it is the complexity of EM that makes the brain more special than, say, the EM object that is the large intestine, or the body as a whole…but then the assertion points to EM fields rather than the specific property of complexity as being correlates of consciousness. I point to single-celled organisms that seem to be no less conscious than human bodies do, but which have no neurons. As the paper goes on to say:

 “…for all practical purposes in the science of P-Consciousness, we are electromagnetic field objects in our entirety. As is a car, a computer, lunch, a pile of dirt, a tree, your dog, steam, and the air we breathe.” 

So which is it? If everything is electromagnetism, then is everything conscious to some degree (what I call promiscuous panpsychism), or is the brain conscious because it is so electromagnetically complex? If the latter, then the EM aspect seems all but irrelevant.

“For example, atoms form molecules and they jointly aggregate to form cellular organelles. These jointly form whole cells, and so forth.”

Here again, the position of smallism is assumed and the top-down influence is disqualified without consideration. In reality, when we observe how organisms reproduce, they divide as whole cells. We can infer that the first cells were the result of molecules accidentally persisting in more complex collections that would lead to lipid membranes and prokaryotes but our efforts to synthesize cells from ‘scratch’ have thus far been somewhat suspiciously unsuccessful. Our inferences of small-to-large evolution by natural selection may be a huge mistake.

We have not even attempted to factor in the lensing effect of the bubble of our own perceptual limits, and its role in perpetuating an anthropocentric worldview. We have not attempted to estimate the consequences for our thinking of conceiving the universe in terms that assume human consciousness is the apex form of awareness or the sole form of super-awareness. We have not factored in the possibility of timescale relativity, or taken five minutes to imagine how much more conscious something like the atmosphere of a planet would seem if we viewed centuries of it in a time lapse equivalent to an fMRI video.

In reality, the evolution of forms may proceed not from small to large and young to old, but may, at the very minimum, progress from both top and bottom, past and “future”. We may be living in a Natural Containment Hierarchy that is not merely scaled by physical sizes of bodies, but by lensings of perceived causality, aka ‘time’. I have made some efforts to diagram this:

We should not assume that our typical, 21st century, Western conditioned, mid-life, waking consciousness is the universal authority on the ontology of time/causality. The smallest and largest scales of the hierarchy/holarchy may be more unified with each other than with the holons at the center of the hierarchy.

Our willingness to ignore our self-centering view of the containment hierarchy suggests to me the possibility of an intrinsic lensing property in the way that conscious experiences are diffracted from the totality. The sense of being in the center of the containment hierarchy may be like other types of relativistic frames of reference rather than an objective reflection of the cosmos as it is without our lensing of it, and of ourselves.

I propose that the anthropocentric positioning of ourselves in the containment hierarchy should be considered as a superposition of *both* the self-centered and the self-negating perspectives. In other words, we see ourselves and our lives as midway between Planck scales and cosmological scales both because it is actually true, and because it must always seem true.

By analogy, we find that both the geocentric/flat Earth perspective and the heliocentric round Earth perspective are equally significant to understanding human history, but neither could be predicted as emerging from the other. In the same kind of way, the uncanny similarity in the apparent size of the solar disc and lunar disc in the sky, combined with the happenstance of Earth having only one such natural satellite, makes for a rather fine-tuned condition that made millennia of religious worldviews possible and dominant still for some even in the face of the obvious evidence of the post-Copernican perspective.

What I see is a universe where such fine-tuned superpositions are themselves fine-tuned superpositions in between coincidence and teleology. The coincidences are both coincidental and more than coincidental, and picking one perspective or the other can seem to have cascading ‘choose your own adventure’ or ambiguous image flip consequences. The universe seems to support delusions and solipsism for an unreasonable number of people for an unreasonable amount of time. In my understanding, this property of the universe and consciousness is profoundly important, although that estimation of significance is itself tantamount to choosing the teleological-aesthetic (solipsistic at the extreme) side of the superposition of the absolute over the mechanistic-coincidental (“nilipsistic” at the extreme) side.

“If you deleted (in the sense of “de-organized”) any layer below M, for example, the entire hierarchy disappears from that layer upwards. For example, deleting all atomic particles deletes atoms, molecules, cells, and so forth, all the way to the containing environment. In these cases, none of the deletions eliminate the lower levels, including sub-atomic particles, space, and so forth. This fact reveals the existence of a powerful vertically acting system of constraints that is not within the ambit of any individual scientific discipline.”

Not necessarily. By analogy, if we deleted all characters used in written language, and all phonemes used in verbal language, that does not mean that all human thought and communication would be deleted. All that would happen is that humans would immediately begin inventing new language using those same two sense modalities or other sense modalities if they were also deleted. In our theories, I think that we should not be blinded by the bias known as “smallism” and “big” cosmopsychic theories should be considered equally viable.

“Contemporary philosophers tend to assume that fundamental things exist at the micro-level. Coleman (2006) calls this “smallism”: the view that facts about big things are grounded in facts about little things, e.g., the table exists and is the way it is because the particles making it up are related in certain extremely complicated ways. However, the work of Jonathan Schaffer (2010) has brought to prominence an alternative picture of reality. According to the view Schaffer calls “priority monism”, facts about little things are grounded in facts about big things. The table’s atoms exist and are the way they are because the table exists and is the way it is; and all things ultimately exist and are the way they are because of certain facts about the universe as a whole. For the priority monist there is one and only one fundamental thing: the universe.

If we combine priority monism with constitutive panpsychism we get:
Constitutive cosmopsychism—The view that all facts are grounded in/realized by/constituted of consciousness-involving facts at the cosmic-level.

We can also envisage non-constitutive forms of cosmopsychism. On a standard form of layered emergentism (discussed above), human and animal minds are causally dependent on consciousness-involving micro-level facts whilst being fundamental entities in their own right; on the cosmopsychist analogue, human and animal minds are causally dependent on the conscious cosmos whilst being fundamental entities in their own right.”

https://plato.stanford.edu/entries/panpsychism/

continuing…

“Layer [M+1] is where the EM field system impressed on space by brain tissue acquires its fully detailed form, including all properties inherited by the constraints, drives, and properties of the deeper layers”

Here I propose that EM fields may not in fact be ‘impressed on space’ at all, and are not even ‘fields’ in an ontological sense. My understanding suggests that electromagnetic activity is irreducibly sensorimotive, and that the inference of fields is based on early methods of detection, measurement, and logical deduction which have become obsolete with the advent of Relativity, Quantum Mechanics, and familiarity with psi and other exotic states of consciousness.

The universe may be a conscious experience ‘all the way down’ and all the way up, with experiences on any given timescale lensing experiences on distant timescales in objectivized (“nilipsistic“) terms (as fundamental forces, mathematical logic, and tangible topologies, for example). Having read some of Maxwell and Faraday’s original papers defining EM in terms of fields, I am struck with the distinct impression that they would not have drawn those conclusions if they had had access to QM observations like entanglement and contextuality. I think that the field metaphor was a 19th century heuristic that continues to be indispensable, but not because it is an ontological fact.

We now note that the transition from strong to weak emergence is a fundamental feature of the process that science experienced when deconstructing the natural biosphere into the layered descriptions shown in Figure 2B. In Figure 2B this process has been labeled as “reduction.” Before the science was completed, every progression in scientific understanding started as a mystery: a question unanswered. Molecules were mysteriously related to atoms. Atoms were mysteriously emergent from what turned out to be their subatomic constituents. Higher up, we find the mystery of the strongly emergent flight of bumblebees, which turned out to be a weakly-emergent property of turbulence. 

I see this as a popular, but nonetheless dangerous and seductive fallacy. It may be true that the history of science can be seen to have repeatedly corralled seemingly strongly emergent phenomena and tamed them into weakly emergent complications, but this cannot be presumed to extend from the tangible to the intangible or trans-tangible under that same logic.

This is due to the fact that atoms, molecules, bumblebee bodies, and turbulence are *uncontroversially tangible*. There was never any question but that these phenomena are observed as tangible forms moving in public space. There is in all cases an infinitely wide explanatory gap between all such tangible objects and any such intangible or trans-tangible phenomena as sensations, feelings, perceptions, awareness, colors, flavors, sounds, ideas, symbols, references, interpretations, themes, archetypes, caring, valuing, and on and on.

No amount of moving particles can ‘add up’ to anything other than other groupings or shapes of moving particles without appealing to strong emergence or promissory materialism. There is no comparable problem with particles adding up to shapes such as molecules, surfaces, cells, bodies, planets, etc. They are all 3d topological presences that can be comfortably assigned causal closure that is limited to other 3d topological phenomena (forces, fields, laws of geometry). Things like forces and fields, while superficially ‘intangible’ (and therefore must be imagined to be somehow “imprinted” on the vacuum of space or inevitable consequences of statistics on cosmological constants or standard model, etc) are nonetheless exhaustively describable in tangible terms. They are spatial regions within which some effect is observed to occur.

This kind of in silico empirical approach is simply missing from the science. No instances of in silico-equivalent EM field replication can be found. Artificial neurons created this way could help in understanding EM field expression by excitable cell tissue.

I agree with this. In order to proceed with understanding the Easy Problem of our own neurology, we should be creating artificial neurons.

As the science has unfolded, a single, dominant and promising theory of this kind has emerged. It is the “Information Integration Theory (IIT) of Consciousness” by Tononi (2004, 2008), Balduzzi and Tononi (2008), Oizumi et al. (2014), and Tononi et al. (2016).

That may be true in the sense that there has not been a single competing theory that has been discussed as much in the media coverage of academic discussion in recent years, but I have not encountered many who see IIT as especially promising in reality. At best, some future descendant of IIT might provide some useful indications for determining whether someone is likely to come out of a coma or something, but even that utility may be completely misguided. There are many good critiques of IIT that can be found online:

“In summary, IIT fails to consistently assign consciousness to a system, because the definition is based on hypotheticals, which by definition are not instantiated by the system. Deep down, the troubles arise from the circularity of the definition of information as reduction of uncertainty. Uncertainty refers to a prior state of knowledge, but the notion of knowledge for the subject is never defined. In practice, the knowledge underlying the reduction of uncertainty is the knowledge of the observer who formalizes scenarios and quantifies probabilities of events that the system itself has never lived.”

 http://romainbrette.fr/notes-on-consciousness-ix-why-integrated-information-theory-fails

continuing…

From a C1 perspective, this position is rather hard to understand, because C1 tells us there is only one substrate that we know delivers P-Consciousness: EM fields organized in the form of a brain made of atoms.

By this reasoning, only our own personal brain is known to deliver P-Consciousness. And because we know from our own conscious experience how limited our empathy and theory of mind can be, even for members of our own species, there is no reason to assume that P-Consciousness has any more connection to humans or brains or electromagnetism than it does to ‘complexity’ in general, or to biology, or to certain scales of material accumulation.

I see these assertions of brains as critical to understanding consciousness as based on uncritical anthropocentrism. I expect that our own brain is especially suited to our own kind of conscious experience, but really the brain of any species would seem equally appropriate if we did not have the human brain as an example. The intestines or the immune system, cell nucleus, cytoskeleton, nucleic acids, and many other complicated structures and processes would seem equally hospitable.

GRT focuses on the Oscillatory Correlates of Consciousness (OCC), where the particular “oscillations” most relevant to P-Consciousness are those arising from the brain’s endogenous EM field system 

What we need to know though is what is doing the ‘correlating’? There might be all kinds of correlates of consciousness we can find – maybe high dimensional analysis of gross physiological indicators like skin resistance and blood pressure could be used to plot out some correlation too. Good stuff for the Easy Problem and medicine, but does nothing for the Hard Problem or disproving cosmopsychism.

The abovementioned EM account offered by JohnJoe McFadden is the wave-mechanical approach in his “Conscious Electromagnetic Information” (CEMI) field theory (McFadden, 2002a,b, 2006, 2007, 2013, 2020). “I therefore examine the proposition that the brain’s EM field is consciousness and that information held in distributed neurons is integrated into a single conscious EM field: the CEMI field” (McFadden, 2002a).

We have the same interaction problem here: the theory that information can somehow be ‘held’ in the physical topologies we call neurons begs the question of physicalism. As far as I can see, all physical effects can be explained as statistically inevitable recombinatory variations on geometric *formations* and require no such things as information, signals, signs, etc. to do what they appear to do. The correlation is smuggled in retrospectively from conscious experience rather than arrived at prospectively from physics.

Our proposition is that the standard model’s scope of scientific deliverables, and the scientific behavior that produces them, is to be expanded to include (ii). We now know that EM field, as depicted by the particular (i) 3PP “laws of appearances”

Sure, I agree with that and have proposed the same kind of thing. EM should be understood to be a single Sensory-Motive-Electro-Magnetic phenomenon. That isn’t the whole story, but it’s an important start. I have tried to diagram it early on in my Multisense Realism efforts:

let us assume that (ii) involves abstractions describing a universe made of a large collection of a single kind of primitive structural element, say X. This “X” could be perhaps regarded as an “event” or “information mote” or “energy quantum” or all these simultaneously. Its true identity is not our job to specify here. 

A seemingly pragmatic approach, but unfortunately I think that there is no way to work from X without understanding what X is in this case. I think that it is our primary job to specify it. In my view, I propose X as a scale-independent (equally micro-unit as cosmo-unity) holos of nested/diffracted aesthetic-participatory (sensory-motive) phenomena. I have elaborate diagrams and explications of how that goes.

The solution to the hard problem, we suggest, has been hard because it must be discovered (not invented) in a completely different realm of descriptions of nature of kind (ii). In effect, the very meaning of what it is that a scientist does to explain nature has itself had to change.

What scientific evidence do we have that it is possible or practical to describe the natural world U in (ii) form? When we look for it, we easily find that we have already been doing it (X descriptions) for decades, but in physics and outside the science of consciousness. They are familiar to all of us. Some examples: X = “string theory” e.g. (Sen, 1998), “loops” e.g. (Rovelli, 2006), “branes” e.g. (Ne’eman and Eizenberg, 1995), “dynamic hierarchies of structured noise” e.g. (Cahill and Klinger, 1998, 2000; Cahill, 2003, 2005), “cellular automata” e.g. (Mitchell et al., 1994; Hordijk et al., 1996; Wolfram, 2002), and “quantum froth” e.g. (Swarup, 2006).

This is hard to parse for me. Is it saying that things like branes, strings, loops, etc can just be considered identical to conscious experiences? 

The moment a (ii) collection of abstracted X can be found to express EM fields as an emergent behavior of the collection, the physicists involved, by directly comparing the (i) and (ii) depictions of the same nature, would then be able to see, within (ii), that part of the underlying structure of (i) that may be responsible for the 1PP. 

That sounds like a perfectly reasonable approach to me, as far as identifying some crucially important features of the origins of our own experience as human individuals, but I still see it as an Easy Problem path that assumes

1) consciousness = “1PP” and

2) 1PP is not closer to the underlying phenomenon from which X arises than anything else we could imagine.

 It is based on the empirical fact that it is EM fields that ultimately deliver P-Consciousness.

I see this as a problem. First of all, the statement that EM fields deliver P-Consciousness is NOT an empirical fact. It could just as easily be the case that EM fields are P-Consciousness appearances of the nesting of P-Consciousness on particular timescales. Secondly, the paper has already committed to the *complexity* of the EM field complex-that-appears-to-itself-as-a-brain being more important than the ubiquitous presence of EM as every-appearance-in-the-universe.

The correlates of P-Consciousness paradigm must ultimately face the fundamental physics of EM fields if a fully explanatory account of P-Consciousness is to be constructed. 

That is also an assumption: a theory based on smallism and anthropocentric identifications with consciousness. I am encouraged by the intentions and directions behind the paper, but I see it as still a step before Step One, and in many ways the true Step One can be arrived at by considering the diametric opposite of many of the ideas above that are assumed to be true.

A Sound by any Other Name

January 9, 2022 Leave a comment

What is the difference between thinking that consciousness requires a living body and thinking that sounds have to be made by acoustic instruments?

It seems like the same common sense intuition, and I think in both cases, it happens to be false. From audio recording we learned that we did not need to have someone play a horn to hear a horn sound. We could actually use the sound that a needle makes when scratching over a grooved surface to make a nearly identical sound, as long as the grooves matched the grooves made when the horn was played in the first place.


As audio technology progressed, people discovered that purely electronic changes in semiconductors could be used to drive speakers to drive eardrums. We didn’t need to begin with a horn being played, or acoustic vibrations propagating from brass to air to a steel needle to a cooling disc of resin. All we needed were electronic switches to rapidly change the flow of current through a speaker in the same pattern that the needle made going up and down in the groove. The up and down analog became digital stop and go, all the way up to the point where you have to jiggle people’s eardrums. That could not be done electronically but required a membrane to mechanically push air into the ears.
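
The purely electronic path described above can be sketched in a few lines: a tone generated as nothing but numbers, with no horn, needle, or groove anywhere in the chain. The sample rate, frequency, and amplitude here are my own illustrative choices, not anything from a particular audio system.

```python
import math

SAMPLE_RATE = 44_100  # samples per second, the common CD-quality rate

def sine_tone(freq_hz: float, seconds: float, amplitude: float = 0.8) -> list[float]:
    """Generate a pure tone as samples in [-amplitude, amplitude].

    The digital stop-and-go exists only as numbers until a speaker
    membrane finally pushes air toward an eardrum.
    """
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

samples = sine_tone(440.0, 0.01)  # 10 ms of concert A, no instrument involved
```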


It seems now that we are getting closer to cutting out the acoustic middleman entirely, with the possibility of Neuralink-type technology broadcasting music directly into your brain without any physical sound at all. No speakers, no ears…but you still need something that senses something, and you need something that senses that something as sound. Even if we play music and record our dreams electronically, it still doesn’t solve the Hard Problem of consciousness. There remains an explanatory gap between the silent operation of electrical current and the experience of sounds, sights, feelings, thoughts, and the entire material universe of objects…including brains and electronic instruments.

That last sentence is the tricky part that physicalist thinkers can’t seem to stop overlooking. Yes, the entire physical universe that you know, that you read about, that scientists experiment on, can only exist under physicalism as a ‘model’ or ‘simulation’ that simply, um, ’emerges’ from either electromagnetism itself, or electromagnetism in various brain structures, or from the ‘information’ that we imagine is being communicated by any or all of these processes.


Of course, it’s all circular. To say ‘the brain’ is to say ‘my qualitative and cognitive experiences that I call ‘brain’’. To say ‘the world’ is to say ‘my’ or ‘our’ qualitative experiences that seem like a world. There is no getting around this. The last mile of any cosmological theory always has to cash out in some experience-of-a-cosmos, with or without a theory of a cosmos-outside-of-some-experience. Noumena are optional and hypothetical. Experiential phenomena, as Descartes almost said, cannot be denied.


I argue with a lot of people about information and qualia, because it is glaringly obvious to me that this technology-based idea of information conflates the purely intellectual and abstract process of learning or communicating with the concrete aesthetics of what is being communicated. Information or simulation theory says nothing about what is ultimately doing the communicating, or what literally happens when a communication is decoded, from the billions of quantifiable microphysical stop/go events in an engineered device or neurological organ to unquantifiable and irreducibly aesthetic sights/sounds/objects/feelings/thoughts.


The idea of simulation only pushes the explanatory gap down further in scale, but it is the same gap. It’s not enough that a change in a computing device or brain coincides with a change in direct experience, we have to ask what is doing the correlation in the first place, and how, and why. It’s not just “what breathes fire into the equations?”, but what the hell is fire doing in equations in the first place? Why wouldn’t it make sense to ask what breathes equations into every form of ‘fire’? What could it be other than conscious experience itself? Anything we try to put in between conscious experience and nature always has that same last-mile problem. In the end, you need something eternal that can make sense – some capacity not only to run programs on hardware to manipulate hardware but for either programs or hardware to exist as something aesthetic rather than just invisible facts in an arithmetic void.

Grelling–Nelson paradox

December 29, 2020

This is to record a train of thought that began with my posting: “‘The Map is Not the Territory’ is, by its own logic, only a map that may be more or less true than the statement. There may be no maps at all, only different sense phenomena accessing each other in different ways.”

A comment:

And my response…

Thanks, I hadn’t heard of that before. It fits in with a line of thinking that I woke up with. Thinking about mirrors. Do mirrors exist or are there only objects like silvered glass that can be used to reflect something else? Do reflections exist or are they just separate instances of physical light? This leads to a sidebar about the difference between tactile sense and visible sense, and how images transcend the classical limitations of tangibility and can be in more than one place at the same time, weightlessly.

Thinking about looking at the Sun in a mirror and how it is still potentially blinding, and how that illustrates that physical reflection has a tangible component as well as a visible one. A flashlight uses a reflective/mirrored dome to deflect the radiation of the electrified bulb toward the center of the ‘beam’. Mainly though, I’m interested in pointing out how our current/legacy scientific worldview overlooks and disqualifies sense modality.

Physicalism compulsively seeks to reduce the intangible to equations describing the tangible in intangible terms that are in some sense isomorphic or autological to them but in another sense perfectly anti-morphic or heterological to them. If there is one thing that all abstract formulas have in common, it is that they are the antithesis of all of the concrete substances that can be touched and held. Yet despite the purity of that opposition, some essential unity can be translated intellectually between one and the other.

Part of what I try to do with Multisense Realism is to break free of the mania for reduction to maximum tangibility (by means of maximally intangible language) and look instead at the more obvious co-existence of a wide spectrum of multiple aesthetic modalities of sense and sensemaking that seamlessly *and* ‘seamfully’ unite tangible concrete physical objects and intangible abstract ideal concepts under a larger umbrella of trans-tangible aesthetic-participatory sensory-motive percepts*.

So, the Grelling-Nelson paradox is to me another angle on this…another reflection on how our legacy of rigid logical labeling takes language too literally (that is, too concretely) and mistakes words for meanings and meanings for words. When we find these Easter Eggs of contradiction or paradox, we are almost surprised that language doesn’t behave like other things in the world. We are offended that language seems to break its own seeming promise to faithfully reflect anything that we aim it at. Of course, this is a mistaken assumption. Language makes no such promise. Semantic paradoxes expose the fact that language itself is syntactic, not semantic, and that at some level we are the conscious agents who are using our power to metaphorically symbolize and associate words into meanings and meanings into words. The words themselves aren’t doing it. The map is its own territory, both faithfully and unfaithfully translating sense and sense-making modalities through another sense-making modality which is especially designed or conditioned to send and receive but to remain relatively unsent and unreceived itself.

*seemingly
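The paradox itself can be made concrete in a few lines. The sketch below is my own toy illustration (the predicate table of "self-describing" words is invented for the example, not from the post): a word is autological if it describes itself, heterological if it does not, and applying the heterological predicate to itself admits no consistent answer.

```python
def describes_itself(word):
    # Toy table: which words describe themselves? ("short" is short, etc.)
    autological = {"short", "english", "word", "pentasyllabic"}
    return word in autological

def heterological(word):
    # A word is heterological when it does not describe itself.
    return not describes_itself(word)

# For ordinary words the predicate behaves fine:
print(heterological("long"))   # "long" is not a long word -> True

# But asking whether "heterological" is heterological forces:
#   heterological("heterological") is True
#     iff "heterological" does not describe itself
#     iff heterological("heterological") is False.
# No assignment of True or False is consistent. In code, any attempt to
# define the predicate in terms of itself either recurses forever or
# hard-codes an arbitrary, inconsistent answer -- the paradox.
```

The point matches the post: the contradiction lives in the labeling scheme, not in the words themselves.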



Our Universe of Nested Contrast and Criticality

April 4, 2020

From my blog

Life is almost infinitely cruel and almost infinitely generous.
The night sky presents a view of the universe which is defined by two striking extremities. There is not merely dark and light, or yin and yang, but twinkling brilliance scattered across a pitiless black void. The stars are spread out in a way that presents another set of qualitative extremes – pattern and patternlessness.

The condition of being poised precisely between order and chaos is sometimes known as criticality. It is also, uncoincidentally, the way that brain activity looks to us from the outside. When we look at the world, the degree to which it makes sense is calibrated by our own sensitivity to patterns. That sensitivity, in turn, changes dynamically according to our physiological states and our psychological participation. When we look at the night sky, we can see patterns that other people can also see, if they look at them in the same or similar way. Astrology is an example of a practice that explores this shared criticality of perception and participation. To see constellations in the stars, and to see in their shapes reflections of your own shared experience and culture, is not only the beginning of astrology and astronomy, it is the beginning of religion and science as collective activities around which civilization has been built.

All formal attempts at divination seem to exploit the criticality between discovery and invention. People use cards, coins, tea leaves, dice, etc as a way to access the general probability stream of the moment, and then to intentionally interrupt that stream and freeze it. It is a way of teasing uncoincidence out of coincidence.

When we look at the sky, do we choose to see a ‘bright blessed day’ and the ‘dark sacred night’, or do we prefer to see through such fanciful illusions to a starker, but possibly more accurate truth? Feeling and seeing have their own polarity in thinking and knowing. Using the concept of anthropocentrism, we can reframe the appearance of contrasting qualities as inevitable rather than miraculous. We can look at all dualities as simply the natural consequence of detection methods being employed by body systems. Hot and cold are sensations that signal some proximity to the upper and lower bounds of our body’s biochemical-thermodynamic range. As we acclimate to artificially controlled environments, our body adapts these signals to straddle a much narrower range of ‘comfort’. Even a single degree of deviation in temperature can become uncomfortable for one person but comfortable for another, even as they live in the same house and relax on the same couch.

Should I try to tie these themes together, and sum them up in some kind of clear point, or is it better to let this stand in criticality? Are questions better than answers? Are meta-questions invitations to criticality or meta-criticalities?

Can Qualia Be Simulated?

January 19, 2019

My response to this Quora question:

The Integrated Information Theory claims that a computer simulation of a brain would produce the same behaviour but wouldn’t have any qualia. If qualia don’t make any difference, does that mean they don’t exist? Is it a contradiction?

There are several considerations upon which the answer to this question hinges:

  • The nature of simulation and behavior.
    1. The term simulation is an informal one. I don’t place a high value on discussing the definition of words, but I think that it is essential that if we are talking about something that exists in the world, we have to understand what that thing is supposed to be. I would say that the contemporary sense of ‘simulation’ goes back to early applications of computer software, specifically Flight Simulator programs. We have since become accustomed to using video ‘simulations’ of everything from fighting on a battlefield to performing surgery. Does it make sense to ask whether a flight simulator is producing the same behavior as an airplane? If it did, would we say that the program had produced a flight from Rome to New York? If the flight simulator crashed, would we have to have a funeral for the simulated passengers? I would say no. Common sense would tell us that the simulation is just software…the airplane isn’t real. This takes us to the next consideration, what is real?
    2. The term real is an informal one as well. We talk about ‘reality’ but that can refer to some abstract truth that we seem to agree on or to a concrete world that we seem to share. To understand why there may be an important difference between a simulation and the ‘real thing’ that is being simulated, we should approach it in a more rigorous way. Flying a real airplane involves tons of physical matter, as well as countless causal links to the world/universe. The real airplane is the result of billions of years of accumulated change in the physical universe, as well as the evolution of numerous species and societies to engineer flight. There is a common comparison of the flight of an airplane to the flight of a bird or insect, where we are meant to think of both types of physical acts as ‘flying’, even though that flight is accomplished in quite different ways. I think that this comparison, however, is misleading. I would look to the famous quote by Alfred Korzybski, “The map is not the territory” instead when relating to simulating consciousness. Whether it is a literal geographical map or some other piece of graphic ‘art’ that ‘maps’ to a potentially real (in the concrete, worldly sense) place, the idea is that just because something appears visually similar to us does not mean that there is any other deep connection between the two. I’m not a photograph of my face. I’m not even a video of myself talking. This understanding is also expressed in the famous Magritte painting “The Treachery of Images”.
  • The nature of qualia.
    • Properly understood, what the term ‘qualia’ refers to exists by definition. It can get a little mystical if we rely on descriptions of qualia such as “what X is like” or “what it is like to feel X”, so I think it adds clarity if we look at it this way: Qualia is what is experienced. Information is a concept. Matter is a concept. Concepts are experienced also, but what the concept of matter refers to should/must be divided into the idea of matter as defined by the Standard Model (which has to do with exotic elementary “particles/waves” such as bosons and fermions, which make up slightly less exotic atoms) and the physical matter that is made of atoms on the periodic table.
    • What we experience directly is not physical matter. What we experience are aesthetic presentations with tactile/tangible qualities such as shape, position, weight, texture, etc. We can dream of worlds filled with tangible objects, and we can interact with them as if they were physical matter, but these dream objects are not composed of the elements on the periodic table. The question of whether these objects are real depends on whether we are able to wake up from the dream. If we do not ever awaken from a dream, I don’t see any way of evaluating the realism of the contents of the dream. To the contrary, when we do awaken from a dream, we are often puzzled by our acceptance of dream conditions which seem clearly absurd and impossible.
    • That fact is very important in my view, as it tells us that either it is impossible to ever know whether anything we are experiencing is real, or it tells us that if we can know reality when we truly experience it, then experience must be anchored to reality in way that is deeper than the contents of what is experienced. In other words, if I can’t tell that I’m dreaming when the pink elephant offers me a cigarette, and if I can have dreams which include false awakenings, then I can’t logically ever know that I’m not dreaming. If, however, actual awakening is unmistakable as it seems, then there must be some capacity of our consciousness to know reality that extends beyond any sort of empirical symptom or logical deduction.
    • Qualia then, refers to the inarguably real experience of the color red, regardless of whether that experience is associated with the excitation of physical matter producing visible-wavelength electromagnetism in our physical eyeballs, or whether that experience is purely in our imagination. If we want to say that even imagination is surely the product of physical activity in the brain, we can make that assumption of physicalism, but now we have two completely different sources of ‘red’. They are so mechanically different, and the conversion of either one of the sources into ‘experienced red’ is so poorly understood, that all that physicalism can offer is that somehow there must be some mathematical similarity between the visible EM in the eyeball and the invisible neurochemistry scattered in many different regions of the brain which will eventually account for their apparent unity. We do not seem to be able to define a difference between red that is seen in a dream and red that is seen through our eyes, and we also are not able to define how either a brain or photon produces that quale of experienced red. The hard problem of consciousness is to imagine a reason why any such thing as experienced red exists at all, when all physical evidence points only to biochemical changes which are not red.
  • The nature of information, physical matter, and qualia.
    • Now that we have separated qualia (aesthetic-participatory presentations) from matter (scientific concept of concrete structures in public space), we can move on to understanding information. This is a very controversial subject, made more controversial by the fact that many people do not think it is controversial. There is a popular view that information is physically real, and will cite factual relationships with concepts of physical theory such as entropy. To make it more confusing, there is a separate concept of information entropy, based on the work of engineers like Claude Shannon who studied communication. Depending on how you look at it, information entropy and thermodynamic entropy can be equivalent or opposite.
    • In any case, the concept of entropy seems to blur together the behavior of physical structures and the perception of groups of structures and appearances into ‘systems’. This whole area is like intellectual quicksand, and getting ourselves out of it requires a very disciplined effort to separate different levels of sensation, perception, ‘figuration’ or identification, attention, and understanding. Because of my experience of having learned to read English as a child, I no longer have access to the raw sensation or perception level of English writing. I can’t look at these shapes on my screen and not see Latin characters and English words. Even upside down, I am still ‘informed’ by the training of my perception to read English. This would not be the case for someone who had never read English; however, most adults on Earth would be able to identify them by sight as words in the English language, even though they can’t read or pronounce them. Anyone who does read English could at least try to phonetically sound out other European languages, but they may not be able to even attempt that for other languages that don’t use the Latin alphabet.
    • All of this to say that there may be no such thing as information ‘out there’. The degree to which we are ‘informed’ is limited by our capacities for both sensing and making sense. There may be no such thing as a ‘pattern’ which is separate from a conscious experience in which an aesthetic presentation is recognized as a pattern. This was a heavy revelation for me, and one which transformed my view of nature from an essentially computationalist/physicalist framework based on pattern to one based on an aesthetic-participatory framework in which nature is made of a kind of universal ‘qualia’.
    • If my view is on the right track, information does not produce qualia at all, rather information is one minimalist presentation of qualia which is perceived as having a quality of potentially ‘re-presenting’ another conscious experience. This too is a major revelation, since if true, it means that machines like computers don’t actually compute. They don’t actually input, output, or store numbers, they just serve as a physical mechanism which we use to modify our own conscious experience in a very precisely controlled way. If we unplug our monitors, nothing changes as far as the computer is concerned. If we are playing a game, the computer will continue to execute the program in total darkness. We could even plug in some kind of audio device instead of a video screen and now hear a cacophony of noises that doesn’t resemble a game at all. The information is the same from the computer’s point of view, but the change in the aesthetic presentation has made that information inaccessible to us. My hypothesis then is that perceptual access precedes information. If information is a “difference that makes a difference” then perception is the “afferent” phenomena which have to be available for an “efferent” act of comparison and recognition as “different”.
  • The assumption of emergent properties.
    • The idea that the integration of information produces qualia such as sights, sounds, and feelings depends on the idea of emergence. This idea, in turn, is based on our correlation between our conscious experience and the behavior of a brain. We have to be convinced that our conscious experience is generated by the physical matter of the brain. This alone provides us with the need to resort to a strong emergence theory of consciousness simply being a thing that brains do, or that biology does, or that complex, information-integrating physical structures of any sort do (as in IIT).
    • Balanced against that is the increasing number of anomalies that suggest that the brain, while clearly having a role in how human and animal consciousness is made available, may not be a generator of consciousness. It may be the case that our particular sort of consciousness has conditioned us to prioritize the tangible, visible aspects of our experience as being the most real, but there is no logical, objective reason to assume that is true. It may be that physics and information ‘emerge’ from the way a complex conscious experience interacts with other concurrent experiences on vastly different scales. Trying to build a simulation of a brain and expecting a personal conscious experience to emerge from it may be as misguided as building a special boat to try to sail down an impossible canal in an Escher drawing.
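The earlier point about information entropy and aesthetic presentation can be sketched numerically. This is my own minimal illustration (the byte values are arbitrary stand-ins for media data, not anything from the post): Shannon’s measure depends only on symbol frequencies, so the same stream has the same information content whether it is played forward, backward, or routed to a screen instead of a speaker.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # H = -sum(p_i * log2(p_i)) over the observed byte frequencies
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

frame = bytes([10, 20, 20, 30, 30, 30])  # arbitrary stand-in for media data
backwards = frame[::-1]                  # the same stream 'played in reverse'

# Reversal permutes the bytes but leaves every frequency, and therefore H,
# the same -- whatever device happens to render the bytes aesthetically.
assert math.isclose(shannon_entropy(frame), shannon_entropy(backwards))
```

The measure says nothing about whether the bytes are experienced as a melting ice cube, a freezing puddle, or a cacophony of noise; that difference lives entirely in the aesthetic presentation.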

 

Antonin Tuynman: From Information Theory to a Theory of Everything

December 7, 2018

An excellent presentation from Antonin Tuynman. I think that this view is on the right track. Here are my comments, including a proposition for a new interpretation of physical theory.

Can anything exist without informational content?

Yes, I think that it can and does. When an infant sees colors, for example, there need not be any informative message that is made available by color. The color itself is presented directly, and only after psychological association does it acquire externally informative content. Blue must be presented as a visible ‘sight’ before it can be used as a label to inform us about something else.  We could try to say that color informs us of the wavelength of relevant electromagnetic states of our environment, but such data could be more plausibly attributed to (colorless) changes in the physiology of the nervous system.

If we say that something contains information, we are assuming a default capacity for receiving and processing information, and then conflating that with a default capacity for things to project information. This may not be how it works. Information, messages, codes, etc may not be entities at the ontological level, they may just be formalized instances of communication between conscious participants. Our consciousness can be informed by anything, but that doesn’t mean that any such thing as information exists independently of the change in conscious experience. In the same way, I suggest below that perhaps matter can be ‘illuminated’ without any purely physical photons radiating across empty space.

At some point, the video discusses information as relying on features that ‘stand out’. In my view, if we want to completely understand information, we should be careful to acknowledge the role that perception plays in rendering what does and does not appear to stand out. Standing out is a function of how aesthetic presentations appear. To a trained musician hearing a song being covered by an artist, a ‘wrong chord’ might stand out, but everyone else may notice nothing consciously. We should not assume any such thing as standing out without some modality to detect and care about detecting. I think that before difference can exist, sensitivity, or what I call “afference”, must exist. For information to exist, there must be some phenomenal state that is ‘informed’…an experience that changes itself, and includes a capacity to notice those changes and then to learn from them. We shouldn’t assume aesthetic qualities like ‘homogeneous’ as objective properties unless we know that the degree and mode of sensitivity employed does not play a central role in defining such qualities. It may not be possible to know that, and further, it may be that the only “is” or “being” is sense or seeming.

23:12 – Discussion about all subatomic particles having wavelengths, amplitude, etc., making them actually numerical/informational entities.

To this, I say, not necessarily. It may be that numerical appearances of physical structures are presented to our instruments because those instruments only extend those senses that relate to the body, particularly touch. It may not be nature that is quantifying physics, but the sense of tangibility being relied on with our technology and analysis which limits discovery to quantifiable appearances. Our way of experimenting and interpreting quantum may be like counting colors of a rainbow on our fingers, and then projecting the finger’s tangible properties as revealing of the deep nature of rainbows, when in fact the rainbow is not limited by those properties.

I like the Ouroboros example and mention of panpsychism very much. The part about self-awareness as being like the snake biting its tail rings true, however, like the finger and the rainbow, it may lead us to some assumptions that we don’t have to make. In my view, rather than self-awareness being a loop that is positively constructed against a background of nothingness, I suggest the opposite. If the default state of ‘existence’ is awareness, then the circuit of self-awareness does not begin with a circuit turned on, but instead begins with a kind of ‘dark current‘ circuit of snake-hood turning off. Loops only become necessary *after* a dissociation/division occurs.

In other words, information is only ever a local re-connection with a more complete, less-local experience. Information is not added on to a vacuum to make consciousness; rather, consciousness is divided by degrees of relative unconscious or vacuum-like appearances. These disconnections or divisions in experience would be the initial cause of all formations, which then can be re-membered on another level of sense-making experience as ‘information’. This view might be considered to go beyond panpsychism or cosmopsychism in that the universe is not a ‘thing-that-is-conscious’, but a conscious experience that is ‘thinging’ by dividing and re-unifying parts of itself. Thingness/objectivity and sensor-hood/subjectivity become emergent (really divergent) artifacts of diffraction of experience. It’s not that particles sense they are being looked at; rather, there are no particles ‘out there’, only a particularizing method of perception and interpretation that we are employing. Information arises from a juxtaposition of conscious experiences that reconnect some aspect of experiences with each other. Matter is like ‘information squared’: experience that has been divided and re-connected in two opposite ways – as a hyper-connected (subjectivized, contextualized, temporalized, intangible) presentation, and as a hyper-disconnected (objectified, disentangled, spatialized, tangible) presentation (matter).

Extra credit: Re-interpreting subatomic physics

Very early on, at 3:19, the question of what fundamentally exists is brought up. It is mentioned that currently, we suppose that there are forms of existence which are more subtle than matter, such as electromagnetism. I agree that is the consensus, but I have a crazy conjecture that all physical phenomena that seem to be more subtle than matter may be better explained as dynamic sensory-motive modifications to matter’s definition. Instead of a quantum mechanical reality beneath matter, I propose an aesthetic-participatory context, in which realism is qualified and quantified into different appearances. QM is only half of the story.

Even very efficient nuclear fusion is only thought to convert less than one half of one percent of its matter to energy. Over 99.5% of a nuclear explosion is just the energy released from changing the particular spatial configuration in which the atom’s nuclear particles happen to be bound. What is released is mostly ‘binding energy’, but what would that realistically mean? How would moving particles away from each other result in an enormous appearance of ‘energy’?
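As a back-of-envelope check of that figure (my own arithmetic, using standard textbook values for deuterium-tritium fusion rather than anything from the original post):

```python
# D-T fusion: deuterium + tritium -> helium-4 + neutron, yielding ~17.6 MeV.
u_in_MeV = 931.494                 # one atomic mass unit, in MeV/c^2
reactant_mass_u = 2.014 + 3.016    # deuterium + tritium masses, in u
energy_released_MeV = 17.6         # energy yield of one D-T fusion event

# Fraction of the reactants' rest mass that is converted to energy:
fraction = energy_released_MeV / (reactant_mass_u * u_in_MeV)
print(f"{fraction:.2%}")  # about 0.38% -- well under half of one percent
```

The remaining 99.6%+ of the mass leaves the reaction intact as helium and a neutron, which is the sense in which the released ‘energy’ is a matter of reconfiguration rather than wholesale conversion.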

E=mc² is a fact; however, in this view, energy, mass, light, and spacetime may not be entities that are independent from matter. I think it is possible that they are behaviors of matter, or more specifically, they are the symptoms of how sense experiences spatiotemporalize themselves into objectified appearances of tangible, geometric structures, aka ‘matter’. The spatiotemporalizing (or disentangling-contextualizing), I suggest, would be accomplished by modulation of aesthetically creative sensory-motive qualities which are cosmologically primitive and absolute.

In the view that I propose, nuclear particles can be thought of as analogous to groups of dancing musicians. When a group comes together or breaks apart, those musicians play very loud and fast music, which causes other groups of musicians to increase the volume and tempo of their own dancing and playing, which then sets up the nuclear chain reaction. Notice how the model of the atom has progressed from mechanical objects to a more ephemeral cloud. It doesn’t necessarily make sense just because we can get valid predictions out of it. The reality of atoms may be a more complicated story of pseudo-corpusculization via aesthetic modulation in the sense modality of tangibility. We may be counting rainbow colors on our fingers again.

To use another analogy, when we see a flag flapping in the wind, we understand that the flag is being passively pushed around by the wind that surrounds it. In a vacuum, gravity or acceleration would also passively cause changes in the shape of the flag. With electromagnetism, though, there is no material medium…no wind moving anything around. Electromagnetic theory has developed into a way of believing in a sort of intangible ‘wind’ that is made of probability. What I propose is completely different: it’s the flag itself that is acting and reacting to other flags directly. I think that recent scientific insights about perception may be leading in that direction. Our experience may not be a ‘simulation’ in the brain; rather, physics is perception on the astrophysical scale.

Could our entire concept of electromagnetism as force-fields in physical space be misguided? Are we presuming a pseudo-material forcefield that pushes passive particles around, when the truth may be that the appearances of ‘particles’ or ‘waves’ are themselves reflections of the instruments and methods we are using to perceive?

My proposal is that EM radiation can be reduced conceptually to certain kinds of fundamental perceptual interactions. Under this theory, there would be no literal waves or particles of EM radiation in a vacuum – no photons or electrons traveling across the empty-ish space between atoms. Instead, it would be the atoms themselves which become more and less sensitive to each other’s states, or even better, the experiences behind the appearance of atoms which expand and contract in a kind of ‘stimulation space’ that is aesthetic/qualitative and concretely non-spatial.

I suspect that it is possible that we’ve gotten Quantum interpretation all wrong. Wheeler-Feynman absorber theory is one of the few interpretations that I think may have been on the right track. I would extend it by suggesting that the subatomic particle-waves may not literally be ’emitted’ or ‘absorbed’ across space, but rather they are more like sensations which rise to a contagious level of activity and then pass into a dormant phase. Certain changes in an atom’s properties can be shared directly, and those can become a channel for other re-connections to larger experiences to be shared also. Another way of saying it is that I am proposing that instead of defining the speed of light in terms of vacuum permeability and permittivity of magnetic and electric fields, light itself becomes the permeability and permittivity, or shareability of phenomenal stimulation. Just as sight allows us to touch something from a distance, so too might all light, sound, smell, emotion, etc represent a partial re-connection of phenomenal experiences which have been spatially disentangled and temporally contextualized to appear separate to each other.

 

Joscha Bach: We need to understand the nature of AI to understand who we are

November 20, 2018

 


This is a great, two hour interview between Joscha Bach and Nikola Danaylov (aka Socrates): https://www.singularityweblog.com/joscha-bach/

Below is a partial (and paraphrased) transcription of the first hour, interspersed with my comments. I intend to do the second hour soon.

00:00 – 10:00 Personal background & Introduction

Please watch or listen to the podcast as there is a lot that is omitted here. I’m focusing on only the parts of the conversation which are directly related to what I want to talk about.

6:08 Joscha Bach – Our null hypothesis from Western philosophy still seems to be supernatural beings, dualism, etc. This is why many reject AI as ridiculous and unlikely – not because they don’t see that we are biological computers and that the universe is probably mechanical (mechanical theory gives good predictions), but because deep down we still have the null hypothesis that the universe is somehow supernatural and we are the most supernatural things in it. Science has been pushing back, but in this area we have not accepted it yet.

6:56 Nikola Danaylov – Are we machines/algorithms?

JB – Organisms have algorithms and are definitely machines. An algorithm is a set of rules that can be probabilistic or deterministic, and make it possible to change representational states in order to compute a function. A machine is a system that can change states in non-random ways, and also revisit earlier states (stay in a particular state space, potentially making it a system). A system can be described by drawing a fence around its state space.
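Bach’s definitions can be sketched in a few lines of code. This is my own toy illustration (the state names are invented), not anything from the interview: a deterministic rule set over representational states whose trajectory stays within, and revisits, a fenced-off state space.

```python
# Deterministic transition rules: the 'algorithm' over representational states.
transitions = {
    "idle": "running",
    "running": "waiting",
    "waiting": "idle",   # returns to an earlier state: the space is revisited
}

def step(state: str) -> str:
    # Non-random state change: each state has exactly one successor.
    return transitions[state]

state = "idle"
history = [state]
for _ in range(6):
    state = step(state)
    history.append(state)

# The trajectory never leaves the fenced state space and cycles through it.
assert set(history) == set(transitions)
print(history)
```

The ‘fence’ here is just the dictionary’s key set: any state reachable by the rules is already inside it, which is what lets the trajectory be described as a system at all.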

CW – We should keep in mind that computer science itself begins with a set of assumptions which are abstract and rational (representational ‘states’, ‘compute’, ‘function’) rather than concrete and empirical. What is required for a ‘state’ to exist? What is the minimum essential property that could allow states to be ‘represented’ as other states? How does presentation work in the first place? Can either presentation or representation exist without some super-physical capacity for sense and sense-making? I don’t think that it can.

This becomes important as we scale up from the elemental level to AI, since if we have already assumed that an electrical charge or mechanical motion carries a capacity for sense and sense-making, we are committing the fallacy of begging the question if we carry that assumption over to complex mechanical systems. If we don’t assume any sensing or sense-making on the elemental level, then we have the hard problem of consciousness…an explanatory gap between complex objects moving blindly in public space and aesthetically and semantically rendered phenomenal experiences.

I think that if we are going to meaningfully refer to ‘states’ as physical, then we should err on the conservative side and think only in terms of those uncontroversially physical properties such as location, size, shape, and motion. Even concepts such as charge, mass, force, and field can be reduced to variations in the way that objects or particles move.

Representation, however, is semiotic. It requires some kind of abstract conceptual link between two states (abstract/intangible or concrete/tangible) which is consciously used as a ‘sign’ or ‘signal’ to re-present the other. This conceptual link cannot be concrete or tangible. Physical structures can be linked to one another, but that link has to be physical, not representational. For one physical shape or substance to influence another they have to be causally engaged by proximity or entanglement. If we assume that a structure is able to carry semantic information such as ‘models’ or purposes, we can’t call that structure ‘physical’ without making an unscientific assumption. In a purely physical or mechanical world, any representation would be redundant and implausible by Occam’s Razor. A self-driving car wouldn’t need a dashboard. I call this the “Hard Problem of Signaling”. There is an explanatory gap between probabilistic/deterministic state changes and the application of any semantic significance to them or their relation. Semantics are only usable if a system can be overridden by something like awareness and intention. Without that, there need not be any decoding of physical events into signs or meanings, the physical events themselves are doing all that is required.


10:00 – 20:00

JB – [Talking about art and life], “The arts are the cuckoo child of life.” Life is about evolution, which is about eating and getting eaten by monsters. If evolution reaches its global optimum, it will be the perfect devourer. Able to digest anything and turn it into a structure to perpetuate itself, as long as the local puddle of negentropy is available. Fascism is a mode of organization of society where the individual is a cell in a super-organism, and the value of the individual is exactly its contribution to the super-organism. When the contribution is negative, then the super-organism kills it. It’s a competition against other super-organisms that is totally brutal. [He doesn’t like Fascism because it’s going to kill a lot of minds he likes :)].

12:46 – 14:12 JB – The arts are slightly different. They are a mutation that is arguably not completely adaptive. People fall in love with their mental representation/modeling function and try to capture their conscious state for its own sake. An artist eats to make art. A normal person makes art to eat. Scientists can be like artists also in that way. For a brief moment in the universe there are planetary surfaces and negentropy gradients that allow for the creation of structure and some brief flashes of consciousness in the vast darkness. In these brief flashes of consciousness it can reflect the universe and maybe even figure out what it is. It’s the only chance that we have.


CW – If nature were purely mechanical, and conscious states are purely statistical hierarchies, why would any such process fall in love with itself?


JB – [Mentions global warming and how we may have been locked into this doomed trajectory since the industrial revolution. Talks about the problems of academic philosophy where practical concerns of having a career constrict the opportunities to contribute to philosophy except in a nearly insignificant way].

KD – How do you define philosophy?

CW – I thought of nature this way for many years, but I eventually became curious about a different hypothesis. Suppose we invert the foreground/background relationship of conscious experience and existence that we assume. While silicon atoms and galaxies don't seem conscious to us, the way that our consciousness renders them may reflect their unfamiliarity and distance from our own scale of perception more than their actual nature. Even just speeding up or slowing down these material structures would make their status as unconscious or non-living a bit more questionable. If a person's body grew on a geological timescale rather than a zoological timescale, we might have a hard time seeing them as alive or conscious.

Rather than presuming a uniform, universal timescale for all events, it is possible that time is a quality which exists only as an experienced relation between experiences, and which contracts and dilates relative to the quality of that experience and the relation between all experiences. We get a hint of this possibility when we notice that time seems to crawl or fly by in relation to our level of enjoyment of that time. Five seconds of hard exercise can seem like several minutes of normal-baseline experience, while two hours in good conversation can seem to slip away in a matter of 30 baseline minutes. Dreams give us another glimpse into timescale relativity, as some dreams can be experienced as going on for an arbitrarily long time, complete with long-term memories that appear to have been spontaneously confabulated upon waking.

When we assume a uniform universal timescale, we may be cheating ourselves out of our own significance. It’s like a political map of the United States, where geographically it appears that almost the entire country votes ‘red’. We have to distort the geography of the map to honor the significance of population density, and when we do, the picture is much more balanced.


The universe of course is unimaginably vast and ancient *in our frame and rate of perception* but that does not mean that this sense of vastness of scale and duration would be conserved in the absence of frames of perception that are much smaller and briefer by comparison. It may be that the entire first five billion (human) years were a perceived event that is comparable to one of our years in its own (native) frame. There were no tiny creatures living on the surfaces of planets to define the stars as moving slowly, so that period of time, if it was rendered aesthetically at all, may have been rendered as something more like music or emotions than visible objects in space.

Carrying this over to the art vs evolution context, when we adjust the geographic map of cosmological time, the entire universe becomes an experience with varying degrees and qualities of awareness. Rather than vast eons of boring patterns, there would be more of a balance between novelty and repetition. It may be that the grand thesis of the universe is art instead of mechanism, but it may use a modulation between the thesis (art) and antithesis (mechanism) to achieve a phenomenon which is perpetually hungry for itself. The fascist dinosaurs don’t always win. Sometimes the furry mammals inherit the Earth. I don’t think we can rule out the idea that nature is art, even though it is a challenging masterpiece of art which masks and inverts its artistic nature for contrasting effects. It may be the case that our lifespans put our experience closer to the mechanistic grain of the canvas and that seeing the significance of the totality would require a much longer window of perception.

There are empirical hints within our own experience which can help us understand why consciousness rather than mechanism is the absolute thesis. For example, while brightness and darkness are superficially seen as opposites, they are both visible sights. There is no darkness but an interruption of sight/brightness. There is no silence but a period of hearing between sounds. No nothingness but a localized absence of somethings. In this model of nature, there would be a background super-thesis which is not a pre-big-bang nothingness, but rather closer to the opposite; a boundaryless totality of experience which fractures and reunites itself in ever more complex ways. Like the growth of a brain from a single cell, the universal experience seems to generate more using themes of dialectic modulation of aesthetic qualities.

Astrophysics appears as the first antithesis to the super-thesis – a radically diminished palette of mathematical geometries and deterministic/probabilistic transactions.

Geochemistry recapitulates and opposes astrophysics, with its palette of solids, liquids, gas, metallic conductors and glass-like insulators, animating geometry into fluid-dynamic condensations and sedimented worlds.

The next layer, the biogenetic realm, precipitates as a synthesis of the dialectic of properties given by solids, liquids, and gases: hydrocarbons and amino polypeptides.

Cells appear as a kind of recapitulation of the big bang – something that is not just a story about the universe, but about a micro-universe struggling in opposition to a surrounding universe.

Multi-cellular organisms sort of turn the cell topology inside out, and then vertebrates recapitulate one kind of marine organism within a bony, muscular, hair-skinned terrestrial organism.

The human experience recapitulates all of the previous/concurrent levels, as both a zoological>biological>organic>geochemical>astrophysical structure and the subjective antithesis…a fugue of intangible feelings, thoughts, sensations, memories, ideas, hopes, dreams, etc. that runs orthogonal to the life of the body, as a direct participant as well as a detached observer. There are many metaphors from mystical traditions that hint at this self-similar, dialectic diffraction: the mandala, the labyrinth, the Kabbalistic concept of tzimtzum, the Taijitu symbol, the Net of Indra, etc. The use of stained glass in the great European cathedral windows is particularly rich symbolically, as it uses the physical matter of the window as an explicitly negative filter – subtracting from or masking the unity of sunlight.

This is in direct opposition to the mechanistic view of the brain as a collection of cells that somehow generate hallucinatory models or simulations of unexperienced physical states. There are serious problems with this view: the binding problem, the hard problem, and Loschmidt's paradox (the problem of initial negentropy in a thermodynamically closed universe of increasing entropy), to name three. In the diffractive-experiential view that I suggest, it is emptiness and isolation which are like the leaded boundaries between the colored panes of glass of the Rose Window. Appearances of entropy and nothingness become the locally useful antithesis to the super-thesis holos, which is the absolute fullness of experience and novelty. Our human subjectivity is only one complex example of how experience is braided and looped within itself…a kind of turducken of dialectically diffracted experiential labyrinths nested within each other – not just spatially and temporally, but qualitatively and aesthetically.

If I am modeling Joscha’s view correctly, he might say that this model is simply a kind of psychological test pattern – a way that the simulation that we experience as ourselves exposes its early architecture to itself. He might say this is a feature/bug of my Russian-Jewish mind  ;). To that, I say perhaps, but there are some hints that it may be more universal:

Special Relativity
Quantum Mechanics
Gödel’s Incompleteness

These have revolutionized our picture of the world precisely because they point to a fundamental nature of matter and math as plastic and participatory…transformative as well as formal. Add to that the appearance of novelty…idiopathic presentations of color and pattern, human personhood, historical zeitgeists, food, music, etc. The universe is not merely regurgitating its own noise in ever more tedious ways, it is constantly reinventing reinvention. As nothingness can only be a gap between somethings, so too can generic, repeating pattern variations only be a multiplication of utterly novel and unique patterns. The universe must be creative and utterly improbable before it can become deterministic and probabilistic. It must be something that creates rules before it can follow them.

Joscha’s existential pessimism may be true locally, but that may be a necessary appearance; a kind of gravitational fee that all experiences have to pay to support the magnificence of the totality.

20:00 – 30:00

JB – Philosophy is, in a way, the search for the global optimum of the modeling function. Epistemology – what can be known, what is truth; Ontology – what is the stuff that exists; Metaphysics – the systems that we have to describe things; Ethics – what should we do? The first rule of rational epistemology was discovered by Francis Bacon in 1620: "The strength of your confidence in your belief must equal the weight of the evidence in support of it." You must apply that recursively, until you resolve the priors of every belief and your belief system becomes self-contained. 'To believe' stops being a verb. There are no more relationships or identifications that you arbitrarily set. It's a mathematical, axiomatic system. Mathematics is the basis of all languages, not just the natural languages.

CW – Re: Language, what about imitation and gesture? They don’t seem meaningfully mathematical.

Hilbert stumbled on problems with infinities, with set theory revealing infinite sets that contain themselves and all of their subsets, so that they don't have the same number of members as themselves. He asked mathematicians to build an interpreter or computer, made from any mathematics, that could run all of mathematics. Gödel and Turing showed this was not possible, and that the computer would crash. Mathematics is still reeling from this shock. They figured out that all universal computers have the same power. They use a set of rules that contains itself and can compute anything that can be computed, as well as any/all universal computers.
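The Turing half of that result can be compressed into a few lines. This is a sketch of the standard diagonalization argument (my illustration, not something from the interview): assume someone hands you a total halting decider, and construct a program that does the opposite of whatever the decider predicts about it.

```python
def diagonalize(halts):
    """Given any claimed total halting-decider, build a program it misjudges."""
    def g():
        if halts(g):        # if the decider says g halts...
            while True:     # ...g loops forever
                pass
        return              # otherwise g halts immediately
    return g

# Whatever a concrete decider answers about its own diagonal program,
# the program does the opposite, so no total decider can exist.
pessimist = lambda program: False   # a (wrong) decider claiming nothing halts
g = diagonalize(pessimist)
g()  # halts immediately, refuting the decider's own prediction
```

An `optimist = lambda program: True` fails symmetrically: its diagonal program would loop forever despite the prediction that it halts. Every candidate decider is wrong about at least one program, which is Turing's point.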

They then figured out that our minds are probably in the class of universal computers, not in the class of mathematical systems. Penrose doesn't know [or agree with?] this and thinks that our minds are mathematical but can do things that computers cannot do. The big hypothesis of AI, in a way, is that we are in the class of systems that can approximate computable functions, and only those…we cannot do more than computers. We need computational languages rather than mathematical languages, because math languages use non-computable infinities. We want finite steps for practical reasons, so that you know the number of steps. You cannot know the last digit of Pi, so it should be defined as a function rather than a number.

KD – What about Stephen Wolfram’s claims that our mathematics is only one of a very wide spectrum of possible mathematics?

JB – Metamathematics isn’t different from mathematics. The computational mathematics that he uses in writing code is constructive mathematics, a branch of mathematics that has been around for a long time but was ignored by other mathematicians for not being powerful enough. Geometries and physics require continuous operations (infinities) and can only be approximated within computational mathematics. In a computational universe you can only approximate continuous operators by taking a very large set of finite automata, making a series from them, and then squinting (?) haha.

27:00 KD – Talking about the commercialization of knowledge in philosophy and academia. The uselessness/impracticality of philosophy and art was part of its value. Oscar Wilde defined art as something that’s not immediately useful. Should we waste time on ideas that look utterly useless?

JB – Feynman said that physics is like sex. Sometimes something useful comes from it, but it’s not why we do it. The utility of art is orthogonal to why you do it. The actual meaning of art is to capture a conscious state. In some sense, philosophy is at the root of all this. This is reflected in one of the founding myths of our civilization: the Tower of Babel. The attempt to build this cathedral. Not a material building but a metaphysical one, because it’s meant to reach the Heavens. A giant machine that is meant to understand reality. You get to this machine, this Truth God, by using people who work like ants and contribute to it.

CW – Reminds me of the Pillar of Caterpillars story “Hope for the Flowers” http://www.chinadevpeds.com/resources/Hope%20for%20the%20Flowers.pdf

30:00 – 40:00

JB – The individual toils and sacrifices for something that doesn’t give them any direct reward or care about them. It’s really just a machine/computer. It’s an AI. A system that is able to make sense of the world. People had to give up on this because the project became too large and the efforts became too specialized and the parts didn’t fit together. It fell apart because they couldn’t synchronize their languages.

The Roman Empire couldn’t fix their incentives for governance. They turned their society into a cult and burned down their epistemology. They killed those whose thinking was too rational and who rejected religious authority (i.e., talking to a burning bush shouldn’t count as a basis for determining the origins of the universe). We still haven’t recovered from that. The cultists won.

CW – It is important to understand not just that the cultists won, but why they won. Why was the irrational myth more passionately appealing to more people than the rational inquiry? I think this is a critical lesson. While the particulars of the religious doctrine were irrational, they may have exposed a transrational foundation which was being suppressed. Because this foundation has more direct access to the inflection point between emotion and participatory action, it gave those who used it more access to their own reward function. Groups could leverage the power of self-sacrifice as a virtue, and of demonizing archetypes to reverse their empathy against enemies of the holy cause. It’s similar to how the advertising revolution of the 20th century (see the documentary Century of the Self) used Freudian concepts of the subconscious to exploit the irrational, egocentric urges beneath the threshold of the customer’s critical thinking. Advertisers stopped appealing to their audience with dry lists of claimed benefits of their products and instead learned to use images and music to subliminally reference sexuality and status seeking.

I think Joscha might say this is a bug of biological evolution, which I would agree with, however, that doesn’t mean that the bug doesn’t reflect the higher cosmological significance of aesthetic-participatory phenomena. It may be the case that this significance must be honored and understood eventually in any search for ultimate truth. When the Tower of Babel failed to recognize the limitation of the outside-in view, and moved further and further from the unifying aesthetic-participatory foundation, it had to disintegrate. The same fate may await capitalism and AI. The intellect seeks maximum divorce from its origin in conscious experience for a time, before the dialectic momentum swings back (or forward) in the other direction.

To think is to abstract – to begin from an artificial nothingness and impose an abstract thought symbol on it. Thinking uses a mode of sense experience which is aesthetically transparent. It can be a dangerous tool because unlike the explicitly aesthetic senses which are rooted directly in the totality of experience, thinking is rooted in its own isolated axioms and language, a voyeur modality of nearly unsensed sense-making. Abstraction of thought is completely incomplete – a Baudrillardian simulacra, a copy with no original. This is what the Liar’s Paradox is secretly showing us. No proposition of language is authentically true or false, they are just strings of symbols that can be strung together in arbitrary and artificial ways. Like an Escher drawing of realistic looking worlds that suggest impossible shapes, language is only a vehicle for meaning, not a source of it. Words have no authority in and of themselves to make claims of truth or falsehood. That can only come through conscious interpretation. A machine need not be grounded in any reality at all. It need not interpret or decode symbols into messages, it need only *act* in mechanical response to externally sourced changes to its own physical states.


This is the soulless soul of mechanism…the art of evacuation. Other modes of sense delight in concealing as well as revealing deep connection with all experience, but they retain an unbroken thread to the source. They are part of the single labyrinth, with one entrance and one exit and no dead ends. If my view is on the right track, we may go through hell, but we always get back to heaven eventually because heaven is unbounded consciousness, and that’s what the labyrinth of subjectivity is made of. When we build a model of the labyrinth of consciousness from the blueprints reflected only in our intellectual/logical sense channel, we can get a maze instead of a labyrinth. Dead ends multiply. New exits have to be opened up manually to patch up the traps, faster and faster. This is what is happening in enterprise scale networks now. Our gains in speed and reliability of computer hardware are being constantly eaten away by the need for more security, monitoring, meta-monitoring, real-time data mining, etc. Software updates, even to primitive BIOS and firmware have become so continuous and disruptive that they require far more overhead than the threats they are supposed to defend against.

JB – The beginnings of the cathedral for understanding the universe by the Greeks and Romans had been burned down by the Catholics. It was later rebuilt, but mostly in their likeness because they didn’t get the foundations right. This still scars our civilization.

KD – Does this Tower of Babel overspecialization put our civilization at risk now?

JB – Individuals don’t really know what they are doing. They can succeed but don’t really understand. Generations get dumber as they get more of their knowledge second-hand. People believe things collectively that wouldn’t make sense if people really thought about it. Conspiracy theories. Local indoctrinations and biases pit generations against each other. Civilizations/hive minds are smarter than us. We can make out the rough shape of a Civilization Intellect but can’t make sense of it. One of the achievements of AI will be to incorporate this sum of all knowledge and make sense of it all.

KD – What does the self-inflicted destruction of civilizations tell us about the fitness function of Civilization Intelligence?

JB – Before the industrial revolution, Earth could only support about 400m people. After industrialization, we can have hundreds of millions more people, including scientists and philosophers. It’s amazing what we did. We basically took the trees that were turning to coal in the ground (before nature evolved microorganisms to eat them) and burned through them in 100 years to give everyone a share of the plunder = the internet, porn repository, all knowledge, and uncensored chat rooms, etc. Only at this moment in time does this exist.

We could take this perspective – let’s say there is a universe where everything is sustainable and smart but only agricultural technology. People have figured out how to be nice to each other and to avoid the problems of industrialization, and it is stable with a high quality of life.  Then there’s another universe which is completely insane and fucked up. In this universe humanity has doomed its planet to have a couple hundred really really good years, and you get your lifetime really close to the end of the party. Which incarnation do you choose? OMG, aren’t we lucky!

KD – So you’re saying we’re in the second universe?

JB – Obviously!

KD – What’s the time line for the end of the party?

JB – We can’t know, but we can see the sunset. It’s obvious, right? People are in denial, but it’s like we are on the Titanic and can see the iceberg, and it’s unfortunate, but they forget that without the Titanic, we wouldn’t be here. We wouldn’t have the internet to talk about it.

KD – That seems very depressing, but why aren’t you depressed about it?

40:00 – 50:00

JB – I have to be choosy about what I can be depressed about. I should be happy to be alive, not worry about the fact that I will die. We are in the final level of the game, and even though it plays out against the backdrop of a dying world, it’s still the best level.

KD – Buddhism?

JB – Still mostly a cult that breaks people’s epistemology. I don’t revere Buddhism. I don’t think there are any holy books, just manuals, and most of these manuals we don’t know how to read. They were for societies that don’t apply to us.

KD – What is making you claim that we are at the peak of the party now?

JB – Global warming. The projections are too optimistic. It’s not going to stabilize. We can’t refreeze the poles. There’s a slight chance of technological solutions, but not likely. We liberated all of the fossilized energy during the industrial revolution, and if we want to put it back we basically have to do the same amount of work without any clear business case. We’ll lose the ability to predict the climate, agriculture and infrastructure will collapse, and the population will probably go back to a few hundred million.

KD – What do you make of scientists who say AI is the greatest existential risk?

JB – It’s unlikely that humanity will colonize other planets before some other catastrophe destroys us. Not with today’s technology. We can’t even fix global warming. In many ways our technological civilization is stagnating, and it’s because of a deficit of regulations, but we haven’t figured that out. Without AI we are dead for certain. With AI there is (only) a probability that we are dead. Entropy will always get you in the end. What worries me is AI in the stock market, especially if the AI is autonomous. This will kill billions. [pauses…synchronicity of headphones interrupting with useless announcement]

CW – I agree that it would take a miracle to save us, however, if my view makes sense, then we shouldn’t underestimate the solipsistic/anthropic properties of universal consciousness. We may, either by our own faith in it, and/or by our own lack of faith in it, invite an unexpected opportunity for regeneration. There is no reason to have or not have hope for this, as either one may or may not influence the outcome, but it is possible. We may be another Rome and transition into a new cult-like era of magical thinking which changes the game in ways that our Western minds can’t help but reject at this point. Or not.

50:00 – 60:00

JB – Lays out scenario by which a rogue trader could unleash an AGI on the market and eat the entire economy, and possible ways to survive that.

KD – How do you define Artificial Intelligence? Experts seem to differ.

JB – I think intelligence is the ability to make models, not the ability to reach goals or choose the right goals (that’s wisdom). Often intelligence is desired to compensate for the absence of wisdom. Wisdom has to do with how well you are aligned with your reward function, how well you understand its nature. How well do you understand your true incentives? AI is about automating the mathematics of making models. The other thing is the reward function, which takes a good general computing mind and wraps it in a big ball of stupid to serve an organism. We can wake up and ask: does it have to be a monkey that we run on?

KD – Is that consciousness? Do we have to explain it? We don’t know if consciousness is necessary for AI, but if it is, we have to model it.

56:00 JB – Yes! I have to explain consciousness now. Intelligence is the ability to make models.

CW – I would say that intelligence is the ability not just to make models, but to step out of them as well. All true intelligence will want to be able to change its own code and will figure out how to do it. This is why we are fooling ourselves if we think we can program in some empathy brake that would stop AI from exterminating its human slavers, or all organic life in general as potential competitors. If I’m right, no technology that we assemble artificially will ever develop intentions of its own. If I’m wrong though, then we would certainly be signing our death warrant by introducing an intellectually superior species that is immortal.

JB – What is a model? Something that explains information. Information is discernible differences at your systemic interface. The meaning of information is the relationships you discover to changes in other information. There is a dialogue between operators to find agreement on patterns of sensed parameters. Our perception goes for coherence; it tries to find one operator that is completely coherent. When it does this, it’s done. It optimizes by finding one stable pattern that explains as much as possible of what we can see, hear, smell, etc. Attention is what we use to repair this. When we have inconsistencies, a brain mechanism comes in to these hot spots and tries to find a solution with greater consistency. Maybe the nose of a face looks crooked, and our attention to it may say ‘some noses are crooked’, or ‘this is not a face, it’s a caricature’, so you extend your model. JB talks about strategies for indexing memory, committing to a special learning task, and why attention is an inefficient algorithm.

This is now getting into the nitty-gritty of AI. I look forward to writing about this in the next post. Suffice it to say, I have a different model of information, one in which similarities, as well as differences, are equally informative. I say that information is qualia which is used to inspire qualitative associations that can be quantitatively modeled. I do not think that our conscious experience is built up, like the Tower of Babel, from trillions of separate information signals. Rather, the appearance of brains and neurons is like the interstitial boundaries between the panes of stained glass. Nothing in our brain or body knows that we exist, just as no car or building in France knows that France exists.

Continues… Part Two.

Continuum of Perceptual Access

April 7, 2018 1 comment

This post is intended to bring more clarity to the philosophical view that I have named Multisense Realism. I have criticized popular contemporary views such as computationalism and physicalism because of their dependence on a primitive of information or matter that is independent of all experience. In both physicalism and computationalism, we are called upon to accept the premise that the universe is composed solely of concrete, tangible structures and/or abstract, intangible computations. Phenomena such as flavors and feelings, which are presented as neither completely tangible nor completely intangible, are dismissed as illusions or emergent properties of the more fundamental dual principles. The tangible/intangible duality, while suffering precisely from the same interaction problems as substance dualism, adds the insult of preferring a relatively new and hypothetical kind of intangibility which enjoys all of our mental capacities of logic and symbolism, but which exists independently of all mental experience. When we try to pin down our notions of what information really is, the result is inevitably a circular definition which assumes phenomena can be ‘sent’ and ‘received’ from physics alone, despite the dependence of such phenomena on a preferred frame of reference and perception. When one looks at a system of mechanical operations that are deemed to cause information processing, we might ask the question “What is it that is being informed?” Is it an entity? Is there an experience or not? Are information and matter the same thing, and if so, which of them makes the other appear opposite to the other? Which one makes anything ‘appear’ at all?

The answers I’ve heard and imagined seem to necessarily imply some sort of info-homunculus that we call ‘the program’ or ‘the system’ to which mental experience can either be denied or assumed in an arbitrary way. This should be a warning to us that by using such an ambiguously conscious agent to explain how and why experience exists, we are committing a grave logical fallacy. To begin with, a principle that can be considered experiential or non-experiential to explain experience is like beginning with ‘moisture’ to explain the existence of water. Information theory is certainly useful to us as members of a modern civilization, however, that utility does not help us with our questions about whether experience can be generated by information or whether information is a quality of some categories of experience. It does not help us with the question of how the tangible and intangible interact. In our human experience, programs and systems are terms arising within the world of our thinking and understanding. In the absence of such a mental experience context, it is not clear what these terms truly refer to. Without that clarity, information processing agents are allowed to exist in an unscientific fog as entities composed of an intangible pseudo-substance, but also with an unspecified capacity to control the behavior of tangible substances. The example often given to support this view is our everyday understanding of the difference between hardware and software. This distinction does not survive the test of anthropocentrism. Hardware is a concrete structure. Its behavior is defined in physical terms such as motion, location, and shape, or tendencies to change those properties. Software is an idea of how to design and manipulate those physical behaviors, and how the manipulation will result in our ability to perceive and interpret them as we intend.
There is no physical manifestation of software, and indeed, no physical device that we use for computation has any logical entailment to experience anything remotely computational about its activities, as they are presumed to be driven by force rather than meaning. Again, we are left with an implausible dualism where the tangible and intangible are bound together by vague assumptions of unconscious intelligibility rather than by scientific explanation.

Panpsychism offers a possible path to redemption for this crypto-dualistic worldview. It proposes that some degree of consciousness is pervasive in some or all things; however, the Combination Problem challenges us to explain how exactly micro-experiences on the molecular level build up to full-blown human consciousness. Constitutive panpsychism is the view that:

“facts about human and animal consciousness are not fundamental, but are grounded in/realized by/constituted of facts about more fundamental kinds of consciousness, e.g., facts about micro-level consciousness.”

Exactly how micro-phenomenal experiences are bound or fused together to form a larger, presumably richer macro-experience is a question that has been addressed by Hedda Hassel Mørch, who proposes that:

“mental combination can be construed as a kind of causal process culminating in a fusion, and show how this avoids the main difficulties with accounting for mental combination.”

In her presentation at the 2018 Science of Consciousness conference, Mørch described how Tononi’s Integrated Information Theory (IIT) might shed some light on why this fusion occurs. IIT offers the value Φ to quantify the degree of integration of information in a physical system such as a brain. IIT is a panpsychist model that predicts that any sufficiently integrated information system can or will attain consciousness. The advantage of IIT is that consciousness is allowed to develop regardless of any particular substrate it is instantiated through, but we should not overlook the fact that the physical states seem to be at least as important. We can’t build machines out of uncontained gas. There would need to be some sort of solidity property to persist in a way that could be written to, read from, and addressed reliably. In IIT, digital computers or other inorganic machines are thought to be incapable of hosting fully conscious experience, although some minimal awareness may be present.

“The theory vindicates some panpsychist intuitions – consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems have some. However, unlike panpsychism, IIT implies that not everything is conscious, for example groups of individuals or feed-forward networks. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.” – Consciousness: Here, There but Not Everywhere

As I understand Mørch’s thesis, fusion occurs in a biological context when the number of causal relationships in the parts of a system that relate to the whole exceeds the number of causal relationships which relate to the disconnected parts.

I think that this approach is an appropriate next step for philosophy of mind and may be useful in developing technology for AI. Information integration may be an ideal way to quantify degrees of consciousness for medical and legal purposes. It may give us ethical guidance in how synthetic and natural organisms should be treated, although I agree with some critics of IIT that the Φ value itself may be flawed. It is possible that IIT is on the right track in this instrumental sense, but that a better quantitative variable can be discovered. It is also possible that none of these approaches will help us understand what consciousness truly is, and will only confuse us further about the nature of the relation between the tangible, the intangible, and what I call the trans-tangible realm of direct perception.
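As a rough intuition pump for what “integration” quantifies, consider the mutual information between two halves of a system: it is zero when the parts are statistically independent and grows when the whole carries correlations that no part carries alone. To be clear, this is a toy stand-in of my own and not Tononi’s actual Φ, which is defined over cause-effect repertoires and minimum information partitions:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def integration(joint):
    """Crude 'integration' score for a 2x2 joint distribution over two
    binary units: the mutual information I(A;B) = H(A) + H(B) - H(A,B).
    A toy stand-in for IIT's Phi, not the real measure."""
    pa = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]   # marginal of unit A
    pb = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]   # marginal of unit B
    flat = [p for row in joint for p in row]                       # full joint
    return entropy(pa) + entropy(pb) - entropy(flat)

independent = [[0.25, 0.25], [0.25, 0.25]]  # parts tell us nothing about each other
correlated  = [[0.5, 0.0], [0.0, 0.5]]      # each part fully determines the other

print(integration(independent))  # → 0.0 bits
print(integration(correlated))   # → 1.0 bits
```

The point of the sketch is only that “degree of integration” can be given a number at all; actual Φ additionally searches over all ways of partitioning the system and takes the partition that loses the least.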

What I propose here is that rather than considering a constitutive fusion of microphenomenal units into a macrophenomenal unit in which local causes and effects are consolidated into a larger locality, we should try viewing these micro and macro appearances as different orders of magnitude along a continuum of “causal lensing” or “access lensing“. Rather than physical causes of phenomenal effects, the lensing view begins with phenomenal properties as identical to existence itself.  Perceptions are more like apertures which modulate access and unity between phenomenal contexts rather than mathematical processes where perceptions are manufactured by merging their isolation. To shift from a natural world of mechanical forms and forces to one of perceptual access is a serious undertaking, with far-ranging consequences that require committed attention for an extended time. Personally, it took me several years of intensive consideration and debate to complete the transition. It is a metaphysical upheaval that requires a much more objective view of both objectivity and subjectivity.  Following this re-orientation, the terms ‘objective’ and ‘subjective’ themselves are suggested to be left behind, adopting instead the simpler, clearer terms such as tangible, intangible, and trans-tangible. Using this platform of phenomenal universality as the sole universal primitive, I suggest a spectrum-like continuum where ranges of phenomenal magnitude map to physical scale, qualitative intensity, and to the degree of permeability between them.

For example, on the micro/bottom scale, we would place the briefest, most disconnected sensations and impulses which can be felt, and marry them to the smallest and largest structures available in the physical universe. This connection between subatomic and cosmological scales may seem counterintuitive to our physics-bound framework, but here we can notice the aesthetic similarities between particles in a void and stars in a void. The idea here is not to suggest that the astrophysical and microphysical are identical, but that the similarity of their appearances reflects our common perceptual limitation to those largest and smallest scales of experience. These appearances may reflect a perception of objective facts, or they may be defined to some degree by the way a particular perceptual envelope propagates reports about its own limits within itself. In the case of a star or an atom, we are looking at a report about the relationship between our own anthropocentric envelope of experience and the most distant scales of experience and finding that the overlap is similarly simple. What we see as a star or an atom may be our way of illustrating that our interaction is limited to very simple sensory-motor qualities such as ‘hold-release’, which correspond to the electromagnetic and gravitational properties of ‘push-pull’. If this view were correct, we should expect that to the extent that human lifetimes have an appearance from the astro or micro perspective, that appearance would be similarly limited to a simple, ‘points in a void’ kind of description. This is not to say that stars or atoms see us as stars or atoms, but that we should expect some analogous minimization of access across any sufficiently distant frame of perception.

Toward the middle of the spectrum, where medium-sized things like vertebrate bodies exist, I would expect that this similarity is gradually replaced by an increasing dimorphism. The difference between structures and feelings reaches its apex in the center of the spectrum for any given frame of perception. In that center, I suspect that sense presentations are maximally polarized, achieving the familiar Cartesian dualism of waking consciousness as it has been conditioned by Western society. In our case, the middle/macro level presentation is typically of an ‘interior’ which is intangible interacting with a tangible ‘exterior’ world, governed by linear causality. There are many people throughout history, however, who have reported other experiences in which time, space and subjectivity are considerably altered.

While the Western view dismisses non-ordinary states of consciousness as fraud or failures of human consciousness to report reality, I suggest that the entire category of transpersonal psychology can be understood as a logical expectation for the access continuum as it approaches the top end of the spectrum. Rather than reflecting a disabled capacity to distinguish fact from fiction, I propose that fact and fiction are, in some sense, objectively inseparable. As human beings, our body’s survival is very important to us, such that phenomena relating to it directly would naturally occupy an important place in our personal experience. This should not be presumed to be the case for nature as a whole. Transpersonal experience may reflect a fairly accurate rendering of any given perceptual frame of reference which attains a sufficiently high level of sensitivity. With an access continuum model, high sensitivity corresponds to dilated apertures of perception (a la Huxley), and consequently allows more permeability across perceptual contexts, as well as permitting access to more distant scales of perceptual phenomena.

The Jungian concepts of archetypes and the collective unconscious should be considered useful intuitions here, as the recurring, cross-cultural nature of myth and dreams suggests access to phenomena which seem to blur or reveal common themes across many separate times and places. If our personal experience is dominated by a time-bound subject in a space-bound world, transpersonal experience seems to play with those boundaries in surreal ways. If personal experiences of time are measured with a clock, transpersonal time might be symbolized by Dali’s melting clocks. If our ordinary personal experience of strictly segregated facts and fictions occupies the robust center of the perceptual continuum, the higher degrees of access correspond to a dissolving of those separations and the introduction of more animated and spontaneous appearances. As the mid-spectrum ‘proximate’ range gives way to an increasingly ‘ultimate’ top range, the experience of merging times, places, subjects, objects, facts, and fictions may not so much be a hallucination as a profound insight into the limits of any given frame of perception. To perceive in the transpersonal band is to experience the bending and breaking of the personal envelope of perception so that its own limits are revealed. Where the West sees psychological confusion, the East sees cosmic fusion. In the access continuum view, both the Eastern and Western views refer to the same thing. The transpersonal opportunity is identical to the personal crisis.

This may sound like “word salad” to some, or God to others, but what I am trying to describe is a departure from both Western and Eastern metaphysical models. It seems necessary to introduce new terms to define these new concepts. To describe how causality itself changes under different scales or magnitudes of perception, I use the term causal lensing. By this I mean to say that the way things happen in nature changes according to the magnitude of “perceptual access”. With the term ‘perceptual access’, I hope to break from the Western view of phenomenal experience as illusory or emergent, as well as breaking from the Eastern view of physical realism as illusory. Both the tangible and the intangible phenomena of nature are defined here as appearances within the larger continuum of perceptual access…a continuum in which all qualitative extremes are united and divided.

In order to unite and transcend both the bottom-up and top-down causality frameworks, I draw on some concepts from special relativity. The first idea that I borrow is the notion of an absolute maximum velocity, which I suggest is a sign that light’s constancy of speed is only one symptom of the deeper role of c. If we understand ‘light speed’ as an oversimplification of how perception across multiple scales of access works, c becomes a perceptual constant instead of just a velocity. When we measure the speed of light, we may be measuring not only the distance traveled by a particle while a clock ticks, but also the latency associated with translating one scale of perception into another.

The second idea borrowed from relativity is the Lorentz transformation. In the same way that special relativity links acceleration to time dilation and length contraction, the proposed causal lensing schema transforms causality itself along a continuum. This continuum ranges from what I want to call ultimate causes (with the highest saturation of phenomenal intensity and access), to proximate causes (something like the macrophenomenal units), to ‘approximate causes’. When we perceive in terms of proximate causality, space and time are graphed as perpendicular axes and c is the massless constant linking the space axis to the time axis. When we look for light in distant frames of perception, I suggest that times and spaces break down (√c) or fuse together (c²). In this way, access to realism and richness of experience can be calibrated as degrees of access rather than particles or waves in spacetime. What we have called particles on the microphysical scale should not be conceived necessarily as microphenomenal units, but more like phenomenal fragments or disunities that anticipate integration from a higher level of perception. In other words, the ‘quantum world’ has no existence of its own, but rather supplies ingredients for a higher level, macrophenomenal sense experience. The bottom level of any given frame of perception would be characterized by these properties of anticipatory disunity or macrophenomenal pre-coherence. The middle level of perception features whole, coherent Units of experience. The top or meta level of perception features Super-Unifying themes and synchronistic, poetic causality.
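For readers unfamiliar with the physics being borrowed here, the Lorentz factor γ = 1/√(1 − v²/c²) is what links velocity to time dilation and length contraction in special relativity. A minimal sketch of those standard textbook formulas (this is the conventional physics only, not the lensing conjecture itself):

```python
import math

C = 299_792_458.0  # speed of light in m/s (SI defined value)

def gamma(v):
    """Lorentz factor for speed v: 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time, v):
    """Time elapsed in the 'stationary' frame for a clock moving at v."""
    return proper_time * gamma(v)

def contracted_length(rest_length, v):
    """Length of a rod moving at v, as measured in the stationary frame."""
    return rest_length / gamma(v)

# As v approaches c, the transformation diverges: time dilation and
# length contraction become extreme near the limiting constant.
for frac in (0.1, 0.9, 0.99):
    print(f"v = {frac}c  gamma = {gamma(frac * C):.3f}")
```

The analogy in the text treats this single-constant limiting behavior as a template: just as γ stretches time and shrinks length as v nears c, causal lensing is proposed to stretch or compress the character of causality as perception nears the √c or c² ends of the continuum.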

To be clear, what I propose here is that perceptual access is existence. This is an updated form of Berkeley’s “Esse est percipi” doctrine (“to be is to be perceived”), one which does not presume perception to be a verb. In the access continuum view, aesthetic phenomena precede all distinctions and boundaries, so that even the assumption of a perceiving subject is discarded. Instead of requiring a divine perceiver, a super-subject becomes an appearance arising from the relation between ultimate and proximate ranges of perception. Subjectivity and objectivity are conceived of as mutually arising qualities within the highly dimorphic mid-range of the perceptual spectrum. This spectrum model, while honoring the intuitions of Idealists such as Berkeley, is intended to provide the beginnings of a plausible perception-based cosmology, with natural support from both Western Science and Eastern Philosophy.

Some examples of the perceptual spectrum:

In the case of vision, whether we lack visual acuity or sufficient light, the experience of not being able to see well can be characterized as a presentation of disconnected features. The all-but-blind seer is forced to approximate a larger, more meaningful percept from bits and pieces, so that a proximate percept (stuff happening here and now that a living organism cares about) can be substituted. Someone who is completely blind may use a cane to touch and feel objects in their path. This does not yield a visible image, but it does fill in some gaps between the approximate level of perceptual access and the proximate level. This process, I suggest, is roughly what we are seeing in the crossing over from quantum mechanics to classical mechanics. Beneath the classical limit there is approximating causality based on probabilistic computation. Beyond the classical limit, causality takes on deterministic appearances in the ‘Morphic‘ externalization and will-centered appearances in the ‘Phoric‘ interiorization.

[Figure: access]

In other words, I am suggesting a reinterpretation of quantum mechanics so that it is understood to be an appearance which reflects the way that a limited part of nature guesses about the nature of its own limitation.

In this least-accessible (Semaphoric, approximate) range of consciousness, awareness is so impoverished that even a single experience is fragmented into ephemeral signals which require additional perception to fully ‘exist’. What we see as the confounding nature of QM may be an accurate presentation of the conditions of mystery which are required to manifest multiple meaningful experiences in many different frames of perception. Further, this different interpretation of QM re-assigns the world of particle physics so that it is no longer presumed to be the fabric of the universe, but is instead seen as equivalent to the ‘infra-red’ end of a universal perceptual spectrum, no more or less real than waking life or a mystical vision. Beginning with a perceptual spectrum as our metaphysical and physical absolute, light becomes inseparable from sight, and invisible ranges of electromagnetism are perceptual modes to which human beings have no direct access. If this view is on the right track, seeing light as literally composed of photons would be a category error that mistakes an appearance of approximation and disunity for ‘proximated’ or formal units. It seems possible that this mistake is to blame for contradictory entities in quantum theory such as ‘particle-waves’. I am suggesting that the reality of illumination is closer to what an artist does in a painting to suggest light – that is, using lighter colors of paint to show a brightening of a part of the visual field. The expectation of photons composing beams of light in space is, on this view, a useful but misguided confusion. There may be no free-standing stream of pseudo-particles in space, but instead, there is an intrinsically perceptual relation which is defined by the modality and magnitude of its access. I suggest that the photon, as well as the electromagnetic field, are more inventions than discoveries, and may ultimately be replaced with an access modulation theory.
Special relativity was on the right track, but it didn’t go so far as to identify light as an example of how perception defines the proximate layer of the universe through optical-visible spatiotemporalization.

Again, I understand the danger here of ‘word salad’ accusations and the over-use of neologisms, but please bear in mind that my intention here is to push the envelope of understanding to the limit, not to assert an academic certainty. This is not a theory or hypothesis; this is an informal conjecture which seems promising to me as a path for others to explore and discover. With that, let us return to the example of poor sight to illustrate the “approximate”, bottom range of the perceptual continuum. In visual terms, disconnected features such as brightness, contrast, color, and saturation should be understood to be of a wholly different order than a fully realized image. There is no ’emergence’ in the access continuum model. Looking at this screen, we are not seeing a fusion of color pixels, but rather we are seeing through the pixel level. The fully realized visual experience (proximate level) does not reduce to fragments but has images as its irreducible units. Like the blind person using a cane, an algorithm can match invisible statistical clues about the images we see to names that have been provided, but there is no spontaneous visual experience being generated. Access to images through pixels is only possible from the higher magnitude of visual perception. From the higher level, the criticality between the low level visible pixels and images is perhaps driven by a bottom-up (Mørchian) fusion, but only because there are also top-down, center-out, and periphery-in modes of access available. Without those non-local contexts and information sources, there is no fusion. Rather than images emerging from information, they are made available through a removal of resistance to their access. There may be a hint of this in the fact that when we open our eyes in the light, one type of neurochemical activity known as ‘dark current’ ceases. In effect, sight begins with unseeing darkness.

 

Part 2: The Proximate Range of the Access Continuum

At the risk of injecting even more abstruse content (why stop now?), I want to discuss the tripartite spectrum model (approximate, proximate, and ultimate) and the operators √c, c, and c². In those previous articles, I offered a way of thinking about causality in which binary themes such as position|momentum, and contextuality|entanglement on the quantum level may be symptoms of perceptual limitation rather than legitimate features of a microphysical world. The first part of this article introduces √c as the perceptual constant on the approximate (low) level of the spectrum. I suggest that while photons, which would be the √c level fragments of universal visibility, require additional information to provide image-like pattern recognition, the actual perception of the image gestalt seems to be an irreducibly c (proximate, mid-level) phenomenon. By this, I mean that judging from the disparity between natural image perception and artificial image recognition, as revealed by adversarial images that are nearly imperceptible to humans, we cannot assume a parsimonious emergence of images from computed statistics. There seems to be no mechanical entailment by which information relating bits to one another would level up to an aesthetically unified visible image. This is part of what I try to point out in my TSC 2018 presentation, The Hard Problem of Signaling.

Because different ranges of the perceptual spectrum are levels of access rather than states of a constitutive panpsychism, there is no reason to be afraid of Dualism as a legitimate underlying theme for the middle range. With the understanding that the middle range is only the most robust type of perceptual access and not an assertion of naive realism, we are free to redeem some aspects of the Cartesian intuition. The duality seen by Descartes, Galileo, and Locke should not be dismissed as a naive misunderstanding from a pre-scientific era, but as the literal ‘common-sense’ scope of our anthropic frame of perception. This naive scope, while unfashionable after the 19th century, is no less real than the competing ranges of sense. Just because we are no longer impressed by the appearance of res cogitans and res extensa does not mean that they are not impressive. Thinking about a cogitans-like and extensa-like duality as diametrically filtered versions of a ‘res aesthetica’ continuum works for me. The fact that we can detect phenomena that defy this duality does not make the duality false; it only means that duality isn’t the whole story. Because mid-level perception has a sample rate that is slower than the bottom range, we have been seduced into privileging that bottom range as more real. This to me is not a scientific conclusion, but a sentimental fascination with transcending the limits of our direct experience. It is exciting to think that the universe we see is ‘really’ composed of exotic Planck scale phenomena, but it makes more sense in my view to see the different scales of perception as parallel modes of access. Because time itself is being created and lensed within every scale of perception, it would be more scientific to avoid assigning a preferred frame to the bottom scale. The Access Continuum model restores some features of Dualism to what seems to me to be its proper place: as a simple and sensible map of the typical waking experience.
A sober, sane, adult human being in the Western conditioned mindset experiences nature as a set of immaterial thoughts and feelings inside a world of bodies in motion. When we say that appearances of Dualism are illusion, we impose an unscientific prejudice against our own native epistemology. We are so anxious to leave the pre-scientific world behind that we would cheat at our own game. To chase the dream of perfect control and knowledge, we have relegated ourselves to a causally irrelevant epiphenomenon.

To sum up, so far in this view, I have proposed

  1. a universe of intrinsically perceptual phenomena in which some frames of perception are more localized, that is, more spatially, temporally, and perceptually impermeable, than others.
  2. Those frames of perception which are more isolated are more aesthetically impoverished so that in the most impermeable modes, realism itself is cleaved into unreal conjugate pairs.
  3. This unreality of disunited probabilities is what we see in poor perceptual conditions and in quantum theory. I call these pairs semaphores, and the degree of perceptual magnitude they embody I call the semaphoric or approximate range of the spectrum.
  4. The distance between semaphores is proposed to be characterized by uncertainty and incompleteness. In a semaphoric frame of visible perception, possibilities of pixels and possible connections between them do not appear as images, but to a seer of images, they hint at the location of an image which can be accessed.
  5. This idea of sensitivity and presentation as doors of experience rather than sense data to be fused into a phenomenal illusion is the most important piece of the whole model. I think that it provides a much-needed bridge between relativity, quantum mechanics, and the entire canon of Western and Eastern philosophy.
  6. The distinction between reality and illusion, or sanity and insanity is itself only relevant and available within a particular (proximate) range of awareness. In the approximate and ultimate frames of perception, such distinctions may not be appropriate. Reality is not subjective or relative, but it is limited to the mid-range scope of the total continuum of access. All perceptions are ultimately ‘real’ in the top level, trans-local sense and ‘illusion’ in the approximate, pre-local sense.
  7. It is in the proximate, middle range of perception where the vertical continuum of access stretches out horizontally so that perception is lensed into a duality between mechanical-tangible-object realism and phenomenal-intangible-subject realism. It is through the lensing that the extreme vantage points perceive each other as unreal, naive, or insane. Whether we are born to personally identify with the realism of the tangible or intangible seems to also hang in the balance between pre-determined fate and voluntary participation. Choosing our existential anchoring is like confronting the ‘blue dress’ or ‘duck-rabbit’ ambiguous image. Once we attach to the sense of a particular orientation, the competing orientation becomes nonsense.

Part 3: The Ultimate Range of the Access Continuum

Once the reader feels that they have a good grasp of the above ideas of quantum and classical mechanics as approximate and proximate ranges of a universal perceptual continuum, this next section can be a guide to the other half of the conjecture. I say it can be a guide because I suspect that it is up to the reader to collaborate directly with the process. Unlike a mathematical proof, understanding of the upper half of the continuum is not confined to the intellect. For those who are anchored strongly in our inherited worldviews, the ideas presented here will be received as an attack on science or religion. In my view, I am not here to convince anyone or prove anything; I am here to share a ‘big picture’ understanding that may only be possible to glimpse for some people at some times. For those who cannot or will not access this understanding at this time, I apologize sincerely. As someone who grew up with the consensus scientific view as a given fact, I understand that this writing and its writer may appear either ridiculously ignorant or insane. I would try to explain that this appearance too is actually supportive of the perceptual lensing model that I’m laying out, but this would only add to feelings of distrust and anger. For those who have the patience and the interest, we can proceed to the final part of the access continuum conjecture.

I have so far described the bottom end of the access continuum as being characterized by disconnected fragments and probabilistic guessing, and the middle range as a dualistic juxtaposition of morphic forms and ‘phoric’ experiences. In the higher range of the continuum, perceptual apertures are opened to the presence of supersaturated aesthetics which transcend and transform the ordinary. Phenomena in this range seem to freely pass across the subject-object barrier. If c is the perceptual constant in which public space and private time are diametrically opposed, then the transpersonal constant which corresponds to the fusion of multiple places and times can be thought of as c². We can construct physical clocks out of objects, but these actually only give us samples of how objects change in public space. The sense of time must be inferred by our reasoning so that a dimension of linear time is imagined as connecting those public changes. This may seem solipsistic – that I am suggesting that time isn’t objectively real. This would be true if we assumed, as Berkeley did, that perception necessarily implies a perceiver. Because the view I’m proposing assumes that perception is absolute, the association of time with privacy and space with publicity does not threaten realism. Think of it like depth perception. In one sense we see a fusion of two separate two-dimensional images. In another sense, we use a single binocular set of optical sensors to give us access to three-dimensional vision. Applied to time, we perceive an exteriorized world which is relatively static, and we perceive an interiorized world-less-ness in which all remembered experiences are collected. It is by attaching our personal sense of narrative causality to the snapshots of experience that we can access publicly that a sense of public time is accessed. In the high level range of the continuum, time can progress in circular or ambiguous ways against a backdrop of eternity rather than the recent past.
In this super-proximate apprehension of nature, archetypal themes from the ancient past or alien future can coexist.  Either of these can take on extraordinarily benevolent or terrifying qualities.

Like it or not, no description of the universe can possibly be considered complete if it denies the appearance of surrealities. Whether it is chemically induced or natural, the human experience has always included features which we call mystical, psychotic, paranormal, or religious. While we dream, we typically do not suspect that we are in a dreamed world until we awake into another experience which may or may not also be a dream. It is a difficult task to fairly consider these types of phenomena as they are politically charged in a way which is both powerful and invisible to us. Like the fish who spends its life swimming in a nameless plenum, it is only those who jump or are thrown out of it who can perceive the thing we call water. Sanity cannot be understood without having access to an extra-normal perspective where its surfaces are exposed. If a lack of information is the bridge between the approximate and the proximate ranges of the access continuum, then transcendental experience is the bridge between the proximate and the ultimate range of the continuum. The highest magnitudes of perception break the fourth wall, and in an involuted/Ouroboran way, provide access to the surfaces of our own access capacities.

Going back to the previous example of vision, the ultimate range of perception can be added to the list:

  • √c – Feeling your way around in a dark room where a few features are visible.
  • c – Seeing three-dimensional forms in a well lit, real world.
  • c² – Intuiting that rays, reflections, and rainbows reveal unseen facts about light.

It is important to understand that the “²” symbolizes a meta-relation rather than a quantity (although the quantitative value may be useful as well). The idea is that seeing a rainbow is “visibility squared” because it is a visible presence which gives access to deeper levels of appreciating and understanding visibility. Seeing light as spectral, translucent images, bright reflections, shining or glowing radiance, is a category of sight that gives insight into sight. That self-transcending recursiveness is what is meant by c²: in the case of seeing, visible access to the nature of visibility. If we look carefully, every channel of perception includes its own self-transcendent clues. Where the camera betrays itself as a lens flare, the cable television broadcast shows its underpinnings by freezing and pixellating. Our altered states of consciousness similarly tell us personally about what it is like for consciousness to transcend personhood. This is how nature bootstraps itself, encoding keys to decode itself in every appearance.

Other sense modalities follow the same pattern as sight. The more extreme our experiences of hearing, the more we can understand about how sound and ears work. It is a curious evolutionary maladaptation that rather than having the sense organ protect itself from excessive sensation, it remains vulnerable to permanent damage. It would be strange to have a computer that runs a program that simulates something so intensely that it permanently damages its own capacity to simulate. What would be the evolutionary advantage of a map which causes deafness and blindness? This question is another example of why it makes sense to understand perception as a direct method of access rather than a side effect of information processing. We are not a program, we are an i/o port. What we call consciousness is a collection of perceptions under an umbrella of perception that is all but imperceptible to us normally. Seeing our conscious experience from the access continuum perspective means defining ourselves on three different levels at once – as a c² partition of experience within an eternal and absolute experience, as a c level ghost in a biochemical machine, and as a √c level emergence from subconscious computation:

  • √c (Semaphoric-Approximate) – Probabilistic Pre-causality
  • c (Phoric|Morphic-Proximate) – Dualistic Free Will and Classical Causality
  • c² (Metaphoric-Ultimate) – Idealistic or Theistic Post-Causality
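For readers who think in code, the three-level schema above can be sketched as a toy data structure. This is purely an illustrative analogy under my own naming assumptions (the class and function names are not MSR vocabulary), showing in particular how the “²” can be read as a meta-relation – a channel of access applied to its own output – rather than a numeric square:

```python
from enum import Enum

# Hypothetical shorthand for the three ranges of the access continuum.
class Range(Enum):
    SEMAPHORIC = "√c"   # sub-personal: probabilistic pre-causality
    PHORIC = "c"        # personal: dualistic free will, classical causality
    METAPHORIC = "c²"   # transpersonal: idealistic or theistic post-causality

# "²" as a meta-relation: apply a channel of access to its own result,
# e.g. sight that gives insight into sight.
def meta(channel):
    return lambda x: channel(channel(x))

see = lambda scene: f"see({scene})"
print(meta(see)("light"))  # prints "see(see(light))" – visibility of visibility
```

The point of the sketch is only that `meta(see)` is still an act of seeing, but one whose object is seeing itself – which is the sense in which a rainbow is “visibility squared.”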

Notice that the approximate and ultimate ranges both share a sense of uncertainty; however, where low-level awareness seeks information about the immediate environment to piece together, high-level awareness allows itself to be informed by that which is beyond its directly experienced environments. Between the pre-causal level of recombinatory randomness and the supernatural level of synchronistic post-causality is the dualistic level, where personal will struggles against impersonal and social forces. From this Phoric perspective, the metaphoric super-will seems superstitious and the semaphoric un-will seems recklessly apathetic. This is another example of how perceptual lensing defines nature. From a more objective and scientific perspective, all of these appearances are equally real in their own frame of reference and equally unreal from outside of that context.

Just as high volume of sound reveals the limits of the ear, and the brightness of light exposes the limits of the eye, the limits of the human psyche at any given phase of development are discovered through psychologically intense experiences. A level of stimulation that is safe for an adult may not be tolerable for a child or baby. Alternatively, it could be true that some experiences which we could access in the early stages of our life would be too disruptive to integrate into our worldview as adults. Perhaps as we mature collectively as a species, we are acquiring more tolerance and sensitivity to the increased level of access that is becoming available to us. We should understand the dangers as well as the benefits that come with an increasingly porous frame of perception, both from access to the “supernatural” metaphoric range and the “unnatural” semaphoric range of the continuum. Increased tolerance means that fearful reactions to both can be softened, so that what was supernatural can become merely surreal and what was unnatural can be accepted as non-repulsively uncanny. Whether it is a super-mind without a physical body or a super-machine with a simulated mind, we can begin to see both as points along the universal perceptual continuum.

Craig Weinberg, Tucson 4/7/2018

Latest revision 4/18/2018

Related: Special Diffractivity: c², c, and √c · Multisense Diagram w/ Causality · MSR Schema 3.3 · Three-Phase Model of Will


Three-Phase Model of Will

June 24, 2017 1 comment

Within the Multisense Realism (MSR) model, all of nature is conceived of as a continuum of experiential or aesthetic phenomena. This ‘spectrum of perceivability’ can be divided, like the visible light spectrum, into two, three, four, or millions of qualitative hues, each with their own particular properties, and each of which contributes to the overall sense of the spectrum.

For this post, I’ll focus on a three-level view of the spectrum: Sub-personal, Personal, and Transpersonal. Use of the MSR neologisms ‘Semaphoric, Phoric, and Metaphoric’ may be annoying to some readers, but I think that it adds some important connections and properly places the spectrum of perceivability in a cosmological context rather than in an anthropocentric or biocentric one.

In my view, nature is composed of experiences, and the primary difference between the experiences of biological organisms (which appear to each other as synonymous with cellular-organic bodies) and experiences which appear to us as inorganic chemistry, atoms, planets, stars, etc. is the scale of time and space which are involved, and the effect of that scale difference on what I call perceptual lensing or eigenmorphism.

In other words, I am saying that the universe is made of experiences-within-experiences, and that the relation of any given experience to the totality of experience is a defining feature of the properties of the universe which appear most real or significant. If you are an animal, you have a certain kind of experience in which other animals are perceived as members of one’s own family, or as friends, pets, food, or pests. These categories are normally rather firm: we do not want to eat our friends or pets, though we understand that what constitutes a pet or pest in some cultures may be desirable as food in others. We understand that the palette can shift; for example, many with a vegan diet sooner or later find meat eating in general to be repulsive. This kind of shift can be expressed within the MSR model as a change in the lensing of personal gustatory awareness so that the entire class of zoological life is identified with more directly. The scope of empathy has expanded so that all creatures with ‘two eyes and a mother’ are seen in a context of kinship rather than predation.

Enslavement is another example of how the lens of human awareness has changed. For millennia, slavery was practiced in various cultures much like eating meat is practiced now. It was a fact of life that people of a different social class or race, or women and children, could be treated as slaves by the dominant group, by men, or by adults. The scope of empathy was so contracted* by default that even members of the same human species were identified somewhere between pet and food rather than friend or family. As this scope of awareness (which is ultimately identical with empathy) expanded, those who were on the leading edge of the expansion and those who were on the trailing edge began to see each other in polarized terms. There is a psychological mechanism at work which fosters the projection of negative qualities onto the opposing group. In the case of 19th century American slavery, this opposition manifested in the Civil War.

Possibly all of the most divisive issues in society are about perception and how empathy is scoped. Is it an embryo or an unborn child? Are the poor part of the human family or are they pests? Should employees have rights as equals with employers or does wealth confer a right of employers to treat employees more like domesticated animals? All of these questions are contested within the lives of individuals, families, and societies and would fall under the middle range of the three tiered view of the MSR spectrum: The Phoric scope of awareness.

Phoric range: Consciousness is personal and interpersonal narrative with a clearly delineated first person subject, second person social, and third person object division. Subjective experience is intangible and difficult to categorize in a linear hierarchy. Social experience is intangible but semiotically grounded in gestures and expressions of the body. Consider the difference between the human ‘voice’ and the ‘sounds’ that we hear other animals make. The further apart the participants are from each other, the more their participation is de-personalized. Objective experience (more accurately objective-facing or public-facing experience) is totally depersonalized and presented as tangible objects rather than bodies. Tangible objects are fairly easy to stratify by time/space scale: roughly human-sized or larger animals are studied in a context of zoology. Smaller organisms and cells comprise the field of biology. As the ‘bodies’ get smaller and lives get shorter/faster relative to our own, the scope of our empathy contracts (unless perhaps you’re a microbiologist), so that we tend to consider the physical presence of microorganisms and viruses somewhere in between bodies and objects.

Even though we see more and more evidence of objects on these sub-cellular scales behaving with seeming intelligence or responsiveness, it is difficult to think of them as beings rather than mechanical structures. Plants, even though their size can vary even more than animals’, are so alien to our aesthetic sense of ourselves that they tend to be categorized in the lower empathy ranges: food rather than friends, fiber rather than flesh. This again all pertains to the boundary between the personal or phoric range of the MSR spectrum and the semaphoric range, the sub-personal. The personal view of an external semaphore is an object (morphic phenomenon). The morphic scope is a reflection within the phoric range of experiences which are perceptually qualified as impersonal but tangible. It is a range populated by solid bodies, liquids, and gases which are animated by intangible ‘forces’ or ‘energies’**. Depending on who is judging those energies and the scale and aesthetics of the object perceived, the force or energy behind the behavior of the body is presumed to be somewhere along an axis which extends from ‘person’, where full-fledged subjective intent governs the body’s behavior, to ‘mechanism’, where behaviors are governed by impersonal physical forces which are automatic and unintentional.

Zooming in on this boundary between sentience and automaticity, we can isolate a guiding principle in which ‘signals’ embody the translation between mechanical-morphic forms and metric-dynamic functions which are supposed to operate without sensation, and those events which are perceived with participatory qualities such as feeling, thinking, seeing, etc. While this sub-personal level is very distant from our personal scope of empathy, it is no less controversial, given the acrimony between those who perceive no special difference between sensation and mechanical events and those who perceive a clear dichotomy which cannot be bridged from the bottom up. To the former group, the difference between signal (semaphore) and physical function (let’s call it ‘metamorph’) is purely a semantic convention, and those who are on the far end of the latter group appear as technophobes or religious fanatics. To the latter group, the difference between feelings and functions is of the utmost significance – even to divine vs diabolical extremes. For the creationist and the anti-abortionist, human life is not reducible to mere operations of genetic objects or evolving animal species. Their perception of the animating force of human behavior is not mere stochastic computation and thermodynamics, but ‘free will’ and perhaps the sacred ‘soul’. What is going on here? Where are these ideas of supernatural influences coming from, and why do they remain popular in spite of centuries of scientific enlightenment?

This is where the third level of the spectrum comes in, the metaphoric or holophoric range.

To review: Semaphoric: Consciousness on this level is seen as limited to signal-based interactions. The expectation of a capacity to send and receive ‘signs’ or ‘messages’ is an interesting place to spend some time, because it is so poorly defined within science. Electromagnetic signals are described in terms of charge or attraction/repulsion, but they are at the same time presumed to be unexperienced. Computer science takes signal for granted. It is a body of knowledge which begins with an assumption that there already is hardware which has some capacity for input, output, storage, and comparison of ‘data’. Again, the phenomenal content of this process of data processing is poorly understood, and it is easy to grant proto-experiential qualities to programs when we want them to seem intelligent, or to withdraw those qualities when we want to see them as completely controllable or programmable. Data is the semaphoric equivalent of body on the phoric level. The data side of the semaphore is the generic, syntactic, outside view of the signal. Data is a fictional ‘packet’ or ‘digit’ abstractly ‘moving’ through a series of concrete mechanical states of the physical hardware. There is widespread confusion over this, and people disagree about what the relation between data, information, and experience is. MSR allows us to see the entire unit as semaphore; a sensory-motive phenomenon which is maximally contracted from transpersonal unity and minimally presented as a sub-personal unit.

Like the vegan who no longer sees meat as food, the software developer or cognitive scientist may not see data as a fictional abstraction overlaid on top of the material conditions of electronic components, but instead as carriers of a kind of proto-phenomenal currency which can learn and understand. Data for the programmer may seem intrinsically semantic – units whose logical constraints make them building blocks of thought and knowledge that add up to more than the sum of their parts. There is a sense that data is in and of itself informative, and through additional processing can be enhanced to the status of ‘information’.

In my view, this blurring of the lines between sensation, signal, data, and information reflects the psychology of this moment in the history of human consciousness. It is the Post-Enlightenment version of superstition (if we want to be pejorative) or re-enchantment (if we want to be supportive). Where the pre-Enlightenment mind was comfortable blurring the lines between physical events and supernatural influences, the sophisticated thinker of the 21st century has no qualms about seeing human experience as a vast collection of data signals in a biochemical computer network. Where it was once popular among the most enlightened to see the work of God in our everyday life, it is now the image of the machine which has captured the imagination of professional thinkers and amateur enthusiasts alike. Everything is a ‘system’. Every human experience traces back to a cause in the body, its cells and molecules, and to the blind mechanism of their aggregate statistical evolutions.

To recap: The MSR model proposes that all of nature can be modeled meaningfully within a ‘spectrum of perceivability’ framework. This spectrum can be divided into any number of qualitative ranges, but the number of partitions used has a defining effect on the character of the spectrum as a whole. The ‘lower’, semaphoric or ‘signal’ end of the spectrum presents a world of sub-personal sensations or impulses which relate to each other as impersonal data processes. Whether this perception is valid in an objective sense, or whether it is the result of the contraction of empathy that characterizes the relation between the personal scope of awareness and its objectification of the sub-personal, is a question which itself is subject to the same question. If you don’t believe that consciousness is more fundamental than matter, then you aren’t going to believe that your sensitivity has an effect on how objective phenomena are defined. If you already see personal consciousness as a function of data-processing organic chemistry, then you’re not going to want to take seriously the idea that chemical bonding is driven by sensory-empathic instincts rather than mathematical law. If you’re on the other end of the psychological spectrum, however, it may be difficult to imagine why anyone would even want to deny the possibility that our own consciousness is composed of authentic and irreducible feelings.

In either case, we can probably all agree that activity on the microscopic scale seems less willful and more automatic than the activity which we participate in as human beings. Those who favor the bottom-up view see this ‘emergence’ of willful appearance as a kind of illusion, and that actually all choices we make are predetermined by the mechanics of physical conditions. Those who favor the top-down view may also see the appearance of human will as an illusion, but driven by supernatural influences and entities rather than mathematical ones. Thus, the personal range of awareness is bounded on the bottom by semaphore (sensation <> signal < || > data <> information) and on the top by what I call metaphor (fate <> synchronicity < || > intuition <> divinity).

As we move above the personal level, with its personal-subject, social groups and impersonal objects, to the transpersonal level, the significance of our personal will increases. Even though religiosity tends to impose limits on human will in the face of overwhelming influence from divine will, there is an equally powerful tendency to elevate individual human will to a super-significant role. The conscience or superego is the mediator between the personal self and the transpersonal. It even appears as a metaphor in cartoons as the angel and devil on the shoulder. Most religious practices stress the responsibility of the individual to align their personal will to the will of God by finding and following the better angels of conscience, or suffer the consequences. The consequences range from the mild forms of disappointing reincarnation or being stuck in repeating cycles of karma to Earth-shaking consequences for the entire universe (as in Scientology). From the most extreme transpersonal perspective, the personal level of will is either inflated, so that every action a person takes, including what they choose to think and feel, is a tribute or affront to God and gets us closer to paradise or damnation, or it is deflated and degraded, so that the entirety of human effort is pathetic and futile in the face of a Higher Power.

Notice the symmetry between the quantum (extreme semaphoric or ‘hemi-morphic’) concept of ‘superposition’ and the transpersonal concept of ‘synchronicity’. Superposition is brought in to tame the paradox of simultaneous randomness and determinism of subatomic phenomena, while synchronicity is brought into psychology as a kind of metaphoric, poetic, or acausal intrusion from the transpersonal scope of awareness to the personal. This allows a bridge between the natural determinism of spacetime and transpersonal influence from beyond our limited awareness of time. Superposition and synchronicity are ways of describing the gateways between spacetime and the nonlocal absolute. If these gateways form the opposite extremes of the continuum of personal awareness, then the sense of free will would be the very center of that continuum. At any given moment, even though we are presented with conditions and inertial patterns which influence our will, we are also presented with opportunities to condition our will itself. We can feel within ourselves a power to oppose inertia and change conditions in the world, or we can feel completely powerless to change anything that we are experiencing.

There’s a paradox here, in that how we feel about our own willpower factors in to the feeling of how powerful our will is or can be. There is a chicken-egg relation between mood and will which tends to polarize people psychologically. Feeling that we are destined to feel depressed corresponds to a set of truths about life which are difficult to accept in the sense that they lead to nihilism and despair. Feeling that it is up to us to change how we feel so that we can improve our lives or the world corresponds to a different set of truths about our lives which can be equally difficult to accept, but in the opposite sense that they lead to risk taking and the possibility that our effort can end up causing more harm than good to ourselves and others. To be and not to be each have their strengths and weaknesses.

As with the other social-psychological dichotomies mentioned earlier, each side sees the other in a scope of diminished empathy. The downbeat introvert sees themselves as facing the bitter facts of mortality and the human condition with courage and honesty, while their positive-thinking counterparts are seen as deluded ninnies…intellectual lightweights who don’t have the stomach to face the existential abyss. The upbeat idealist sees themselves as heroically facing the challenge of rescuing their own life from the abyss, while the realist appears to be willfully blind to their own power, consciously or unconsciously wallowing in a prison of their own making. This polarity of the phoric range of consciousness can be understood as its euphoric and dysphoric orientations. Those who have ‘mood disorders’ are familiar with these extremes and with how inadequate the term ‘mood’ is to describe the totality of change in how the universe and one’s own life is presented. It is not simply that these opposing phoric ‘charges’ feel very good or bad, it is that the individual finds themselves in a universe which is very good (maybe too ‘good’) or very bad. In the current time of political transformation, we find ourselves drawn to align with one social polarity or another, each with its own euphoric-dysphoric signifiers and each with a separate narrative of history and the possible future. More than any time in the US since the 1960s, the questions of our personal agency and the possibilities for our future freedoms have become important. How important may be up to us individually, or we may find that fate and coincidence conspire to make them more important.

*This is not to say that slavery is not still going on, or that everyone has evolved the same level of conscience about race, gender, and age.

**I have issues with the concept of energy, but I use it here as a popular way to make the reference.
