Continuum of Perceptual Access
This post is intended to bring more clarity to the philosophical view that I have named Multisense Realism. I have criticized popular contemporary views such as computationalism and physicalism because of their dependence on a primitive of information or matter that is independent of all experience. In both physicalism and computationalism, we are called upon to accept the premise that the universe is composed solely of concrete, tangible structures and/or abstract, intangible computations. Phenomena such as flavors and feelings, which present as neither completely tangible nor completely intangible, are dismissed as illusions or emergent properties of the more fundamental dual principles. The tangible/intangible duality, while suffering from precisely the same interaction problems as substance dualism, adds the insult of preferring a relatively new and hypothetical kind of intangibility which enjoys all of our mental capacities of logic and symbolism, but which exists independently of all mental experience. When we try to pin down our notions of what information really is, the result is inevitably a circular definition which assumes phenomena can be ‘sent’ and ‘received’ from physics alone, despite the dependence of such phenomena on a preferred frame of reference and perception. When we look at a system of mechanical operations that is deemed to cause information processing, we might ask the question “What is it that is being informed?” Is it an entity? Is there an experience or not? Are information and matter the same thing, and if so, which of them makes the other appear as its opposite? Which one makes anything ‘appear’ at all?
The answers I’ve heard and imagined seem to necessarily imply some sort of info-homunculus that we call ‘the program’ or ‘the system’, to which mental experience can either be denied or assumed in an arbitrary way. This should be a warning to us that by using such an ambiguously conscious agent to explain how and why experience exists, we are committing a grave logical fallacy. To begin with a principle that can be considered either experiential or non-experiential in order to explain experience is like beginning with ‘moisture’ to explain the existence of water. Information theory is certainly useful to us as members of a modern civilization; however, that utility does not help us with our questions about whether experience can be generated by information, or whether information is a quality of some categories of experience. It does not help us with the question of how the tangible and intangible interact. In our human experience, programs and systems are terms arising within the world of our thinking and understanding. In the absence of such a mental experience context, it is not clear what these terms truly refer to. Without that clarity, information processing agents are allowed to exist in an unscientific fog as entities composed of an intangible pseudo-substance, but also with an unspecified capacity to control the behavior of tangible substances. The example often given to support this view is our everyday understanding of the difference between hardware and software. This distinction does not survive the test of anthropocentrism. Hardware is a concrete structure. Its behavior is defined in physical terms such as motion, location, and shape, or tendencies to change those properties. Software is an idea of how to design and manipulate those physical behaviors, and how the manipulation will result in our ability to perceive and interpret them as we intend. There is no physical manifestation of software, and indeed, no physical device that we use for computation has any logical entailment to experience anything remotely computational about its activities, as they are presumed to be driven by force rather than meaning. Again, we are left with an implausible dualism where the tangible and intangible are bound together by vague assumptions of unconscious intelligibility rather than by scientific explanation.
Panpsychism offers a possible path to redemption for this crypto-dualistic worldview. It proposes that some degree of consciousness is pervasive in some or all things; however, the Combination Problem challenges us to explain how exactly micro-experiences on the molecular level build up to full-blown human consciousness. Constitutive panpsychism is the view that:
“facts about human and animal consciousness are not fundamental, but are grounded in/realized by/constituted of facts about more fundamental kinds of consciousness, e.g., facts about micro-level consciousness.”
Exactly how micro-phenomenal experiences are bound or fused together to form a larger, presumably richer macro-experience is a question that has been addressed by Hedda Hassel Mørch, who proposes that:
“mental combination can be construed as a kind of causal process culminating in a fusion, and show how this avoids the main difficulties with accounting for mental combination.”
In her presentation at the 2018 Science of Consciousness conference, Mørch described how Tononi’s Integrated Information Theory (IIT) might shed some light on why this fusion occurs. IIT offers the value Φ to quantify the degree of integration of information in a physical system such as a brain. IIT is a panpsychist model which predicts that any sufficiently integrated information system can or will attain consciousness. The advantage of IIT is that consciousness is allowed to develop regardless of the particular substrate it is instantiated through, but we should not overlook the fact that the physical states seem to be at least as important. We can’t build machines out of uncontained gas. There would need to be some sort of solidity property which persists in a way that can be written to, read from, and addressed reliably. In IIT, digital computers and other inorganic machines are thought to be incapable of hosting fully conscious experience, although some minimal awareness may be present.
“The theory vindicates some panpsychist intuitions – consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems have some. However, unlike panpsychism, IIT implies that not everything is conscious, for example, groups of individuals or feed-forward networks. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.” – Consciousness: Here, There but Not Everywhere
As I understand Mørch’s thesis, fusion occurs in a biological context when the number of causal relationships among the parts of a system that relate to the whole exceeds the number of causal relationships which relate only to the disconnected parts.
I think that this approach is an appropriate next step for philosophy of mind and may be useful in developing technology for AI. Information integration may be an ideal way to quantify degrees of consciousness for medical and legal purposes. It may give us ethical guidance in how synthetic and natural organisms should be treated, although I agree with some critics of IIT that the Φ value itself may be flawed. It is possible that IIT is on the right track in this instrumental sense, but that a better quantitative variable can be discovered. It is also possible that none of these approaches will help us understand what consciousness truly is, and will only confuse us further about the nature of the relation between the tangible, the intangible, and what I call the trans-tangible realm of direct perception.
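As a toy illustration of what quantifying integration can look like in code: the sketch below is not IIT’s actual Φ (which involves a search over partitions of a system’s cause-effect structure), only a crude Python stand-in that uses mutual information between two parts as a proxy for ‘the whole exceeding the parts’.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits, from a 2-D joint probability table over parts X and Y."""
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of part X
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of part Y
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# A two-part toy 'system': a joint distribution over binary parts X and Y.
independent = np.outer([0.5, 0.5], [0.5, 0.5])   # the parts ignore each other
integrated = np.array([[0.45, 0.05],
                       [0.05, 0.45]])            # the parts constrain each other

print(mutual_information(independent))  # 0.0 bits: no integration
print(mutual_information(integrated))   # ~0.53 bits: the whole exceeds the parts
```

A number like this can be computed for any system, which is exactly why such measures are attractive for medical and legal purposes, and also why critics can ask whether the number really tracks consciousness.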
What I propose here is that rather than considering a constitutive fusion of microphenomenal units into a macrophenomenal unit in which local causes and effects are consolidated into a larger locality, we should try viewing these micro and macro appearances as different orders of magnitude along a continuum of “causal lensing” or “access lensing”. Rather than beginning with physical causes of phenomenal effects, the lensing view begins with phenomenal properties as identical to existence itself. Perceptions are more like apertures which modulate access and unity between phenomenal contexts than mathematical processes in which perceptions are manufactured by merging isolated parts. To shift from a natural world of mechanical forms and forces to one of perceptual access is a serious undertaking, with far-ranging consequences that require committed attention for an extended time. Personally, it took me several years of intensive consideration and debate to complete the transition. It is a metaphysical upheaval that requires a much more objective view of both objectivity and subjectivity. Following this re-orientation, I suggest leaving behind the terms ‘objective’ and ‘subjective’ themselves, adopting instead simpler, clearer terms such as tangible, intangible, and trans-tangible. Using this platform of phenomenal universality as the sole universal primitive, I suggest a spectrum-like continuum in which ranges of phenomenal magnitude map to physical scale, to qualitative intensity, and to the degree of permeability between them.
For example, on the micro/bottom scale, we would place the briefest, most disconnected sensations and impulses which can be felt, and marry them to the smallest and largest structures available in the physical universe. This connection between subatomic and cosmological scales may seem counterintuitive to our physics-bound framework, but here we can notice the aesthetic similarities between particles in a void and stars in a void. The idea here is not to suggest that the astrophysical and microphysical are identical, but that the similarity of their appearances reflects our common perceptual limitation to those largest and smallest scales of experience. These appearances may reflect a perception of objective facts, or they may be defined to some degree by the way a particular perceptual envelope propagates reports about its own limits within itself. In the case of a star or an atom, we are looking at a report about the relationship between our own anthropocentric envelope of experience and the most distant scales of experience, and finding that the overlap is similarly simple. What we see as a star or an atom may be our way of illustrating that our interaction is limited to very simple sensory-motor qualities such as ‘hold-release’, which correspond to electromagnetic and gravitational properties of ‘push-pull’. If this view were correct, we should expect that to the extent that human lifetimes have an appearance from the astro or micro perspective, that appearance would be similarly limited to a simple, ‘points in a void’ kind of description. This is not to say that stars or atoms see us as stars or atoms, but that we should expect some analogous minimization of access across any sufficiently distant frame of perception.
Toward the middle of the spectrum, where medium-sized things like vertebrate bodies exist, I would expect that this similarity is gradually replaced by an increasing dimorphism. The difference between structures and feelings reaches its apex in the center of the spectrum for any given frame of perception. In that center, I suspect that sense presentations are maximally polarized, achieving the familiar Cartesian dualism of waking consciousness as it has been conditioned by Western society. In our case, the middle/macro level presentation is typically of an intangible ‘interior’ interacting with a tangible ‘exterior’ world, governed by linear causality. There are many people throughout history, however, who have reported other experiences in which time, space, and subjectivity are considerably altered.
While the Western view dismisses non-ordinary states of consciousness as fraud or as failures of human consciousness to report reality, I suggest that the entire category of transpersonal psychology can be understood as a logical expectation for the access continuum as it approaches the top end of the spectrum. Rather than reflecting a disabled capacity to distinguish fact from fiction, I propose that fact and fiction are, in some sense, objectively inseparable. As human beings, our body’s survival is very important to us, so that phenomena relating to it directly would naturally occupy an important place in our personal experience. This should not be presumed to be the case for nature as a whole. Transpersonal experience may reflect a fairly accurate rendering of any given perceptual frame of reference which attains a sufficiently high level of sensitivity. With an access continuum model, high sensitivity corresponds to dilated apertures of perception (a la Huxley), and consequently allows more permeability across perceptual contexts, as well as permitting access to more distant scales of perceptual phenomena.
The Jungian concepts of archetypes and the collective unconscious should be considered useful intuitions here, as the recurring, cross-cultural nature of myth and dreams suggests access to phenomena which seem to blur or reveal common themes across many separate times and places. If our personal experience is dominated by a time-bound subject in a space-bound world, transpersonal experience seems to play with those boundaries in surreal ways. If personal experiences of time are measured with a clock, transpersonal time might be symbolized by Dali’s melting clocks. If our ordinary personal experience of strictly segregated facts and fictions occupies the robust center of the perceptual continuum, the higher degrees of access correspond to a dissolving of those separations and the introduction of more animated and spontaneous appearances. As the mid-spectrum ‘proximate’ range gives way to an increasingly ‘ultimate’ top range, the experience of merging times, places, subjects, objects, facts, and fictions may not so much be a hallucination as a profound insight into the limits of any given frame of perception. To perceive in the transpersonal band is to experience the bending and breaking of the personal envelope of perception so that its own limits are revealed. Where the West sees psychological confusion, the East sees cosmic fusion. In the access continuum view, both the Eastern and Western views refer to the same thing. The transpersonal opportunity is identical to the personal crisis.
This may sound like “word salad” to some, or God to others, but what I am trying to describe is a departure from both Western and Eastern metaphysical models. It seems necessary to introduce new terms to define these new concepts. To describe how causality itself changes under different scales or magnitudes of perception, I use the term causal lensing. By this I mean to say that the way things happen in nature changes according to the magnitude of “perceptual access”. With the term ‘perceptual access’, I hope to break from the Western view of phenomenal experience as illusory or emergent, as well as breaking from the Eastern view of physical realism as illusory. Both the tangible and the intangible phenomena of nature are defined here as appearances within the larger continuum of perceptual access…a continuum in which all qualitative extremes are united and divided.
In order to unite and transcend both the bottom-up and top-down causality frameworks, I draw on some concepts from special relativity. The first idea that I borrow is the notion of an absolute maximum velocity, which I suggest is a sign that light’s constancy of speed is only one symptom of the deeper role of c. If we understand ‘light speed’ as an oversimplification of how perception works across multiple scales of access, c becomes a perceptual constant instead of just a velocity. When we measure the speed of light, we may be measuring not only the distance traveled by a particle while a clock ticks, but also the latency associated with translating one scale of perception into another.
The second idea borrowed from relativity is the Lorentz transformation. In the same way that special relativity links relative velocity to time dilation and length contraction, the proposed causal lensing schema transforms causality itself along a continuum. This continuum ranges from what I want to call ultimate causes (with the highest saturation of phenomenal intensity and access), to proximate causes (something like the macrophenomenal units), to ‘approximate causes’. When we perceive in terms of proximate causality, space and time are graphed as perpendicular axes and c is the massless constant linking the space axis to the time axis. When we look for light in distant frames of perception, I suggest that times and spaces break down (√c) or fuse together (c²). In this way, access to realism and richness of experience can be calibrated as degrees of access rather than particles or waves in spacetime. What we have called particles on the microphysical scale should not necessarily be conceived as microphenomenal units, but more like phenomenal fragments or disunities that anticipate integration from a higher level of perception. In other words, the ‘quantum world’ has no existence of its own, but rather supplies ingredients for a higher level, macrophenomenal sense experience. The bottom level of any given frame of perception would be characterized by these properties of anticipatory disunity or macrophenomenal pre-coherence. The middle level of perception features whole, coherent Units of experience. The top or meta level of perception features Super-Unifying themes and synchronistic, poetic causality.
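For reference, the standard Lorentz transformation being borrowed here relates the coordinates of two frames in relative motion at velocity v along x:

\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad x' = \gamma\,(x - vt)

Only the structure of this transformation is borrowed; the √c and c² above are meta-notations for ranges of perceptual access, not velocities to be plugged into γ.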
To be clear, what I propose here is that perceptual access is existence. This is an updated form of Berkeley’s “Esse est percipi” (“to be is to be perceived”) doctrine, one which does not presume perception to be a verb. In the access continuum view, aesthetic phenomena precede all distinctions and boundaries, so that even the assumption of a perceiving subject is discarded. Instead of requiring a divine perceiver, a super-subject becomes an appearance arising from the relation between ultimate and proximate ranges of perception. Subjectivity and objectivity are conceived of as mutually arising qualities within the highly dimorphic mid-range of the perceptual spectrum. This spectrum model, while honoring the intuitions of Idealists such as Berkeley, is intended to provide the beginnings of a plausible perception-based cosmology, with natural support from both Western Science and Eastern Philosophy.
Some examples of the perceptual spectrum:
In the case of vision, whether we lack visual acuity or sufficient light, the experience of not being able to see well can be characterized as a presentation of disconnected features. The all-but-blind seer is forced to approximate a larger, more meaningful percept from bits and pieces, so that a proximate percept (stuff happening here and now that a living organism cares about) can be substituted. Someone who is completely blind may use a cane to touch and feel objects in their path. This does not yield a visible image, but it does fill in some gaps between the approximate level of perceptual access and the proximate level. This process, I suggest, is roughly what we are seeing in the crossing over from quantum mechanics to classical mechanics. Beneath the classical limit there is an approximating causality based on probabilistic computation. Beyond the classical limit, causality takes on deterministic appearances in the ‘Morphic‘ externalization and will-centered appearances in the ‘Phoric‘ interiorization.

In other words, I am suggesting a reinterpretation of quantum mechanics so that it is understood to be an appearance which reflects the way that a limited part of nature guesses about the nature of its own limitation.
In this least-accessible (Semaphoric, approximate) range of consciousness, awareness is so impoverished that even a single experience is fragmented into ephemeral signals which require additional perception to fully ‘exist’. What we see as the confounding nature of QM may be an accurate presentation of the conditions of mystery which are required to manifest multiple meaningful experiences in many different frames of perception. Further, this different interpretation of QM re-assigns the world of particle physics so that it is no longer presumed to be the fabric of the universe, but is instead seen as equivalent to the ‘infra-red’ end of a universal perceptual spectrum, no more or less real than waking life or a mystical vision. Beginning with a perceptual spectrum as our metaphysical and physical absolute, light becomes inseparable from sight, and invisible ranges of electromagnetism become perceptual modes to which human beings have no direct access. If this view is on the right track, seeing light as literally composed of photons would be a category error that mistakes an appearance of approximation and disunity for ‘proximated’ or formal units. It seems possible that this mistake is to blame for contradictory entities in quantum theory such as ‘particle-waves’. I am suggesting that the reality of illumination is closer to what an artist does in a painting to suggest light – that is, using lighter colors of paint to show a brightening of a part of the visual field. The expectation of photons composing beams of light in space is, on this view, a useful but misguided confusion. There may be no free-standing stream of pseudo-particles in space, but instead an intrinsically perceptual relation which is defined by the modality and magnitude of its access. I suggest that the photon, as well as the electromagnetic field, are more inventions than discoveries, and may ultimately be replaced with an access modulation theory. Special relativity was on the right track, but it didn’t go so far as to identify light as an example of how perception defines the proximate layer of the universe through optical-visible spatiotemporalization.
Again, I understand the danger here of ‘word salad’ accusations and the over-use of neologisms, but please bear in mind that my intention here is to push the envelope of understanding to the limit, not to assert an academic certainty. This is not a theory or hypothesis; it is an informal conjecture which seems promising to me as a path for others to explore and discover. With that, let us return to the example of poor sight to illustrate the “approximate”, bottom range of the perceptual continuum. In visual terms, disconnected features such as brightness, contrast, color, and saturation should be understood to be of a wholly different order than a fully realized image. There is no ’emergence’ in the access continuum model. Looking at this screen, we are not seeing a fusion of color pixels, but rather we are seeing through the pixel level. The fully realized visual experience (proximate level) does not reduce to fragments but has images as its irreducible units. Like the blind person using a cane, an algorithm can match invisible statistical clues about the images we see to names that have been provided, but there is no spontaneous visual experience being generated. Access to images through pixels is only possible from the higher magnitude of visual perception. From the higher level, the criticality between the low level visible pixels and images is perhaps driven by a bottom-up (Mørchian) fusion, but only because there are also top-down, center-out, and periphery-in modes of access available. Without those non-local contexts and information sources, there is no fusion. Rather than images emerging from information, they are made available through a removal of resistance to their access. There may be a hint of this in the fact that when we open our eyes in the light, one type of neurochemical activity known as ‘dark current’ ceases. In effect, sight begins with unseeing darkness.
Part 2: The Proximate Range of the Access Continuum
At the risk of injecting even more abstruse content (why stop now?), I want to discuss the tripartite spectrum model (approximate, proximate, and ultimate) and the operators √c, c, and c²*. In those previous articles, I offered a way of thinking about causality in which binary themes such as position|momentum and contextuality|entanglement on the quantum level may be symptoms of perceptual limitation rather than legitimate features of a microphysical world. The first part of this article introduces √c as the perceptual constant on the approximate (low) level of the spectrum. I suggest that while photons, which would be the √c level fragments of universal visibility, require additional information to provide image-like pattern recognition, the actual perception of the image gestalt seems to be an irreducibly c (proximate, mid-level) phenomenon. By this I mean that, judging from the disparity between natural image perception and artificial image recognition, as revealed by adversarial images that are nearly imperceptible to humans, we cannot assume a parsimonious emergence of images from computed statistics. There seems to be no mechanical entailment by which bits of information relating to one another would level up to an aesthetically unified visible image. This is part of what I try to point out in my TSC 2018 presentation, The Hard Problem of Signaling.
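For readers unfamiliar with adversarial images, here is a minimal numpy sketch of the gradient-sign idea behind them, using a hypothetical toy linear classifier rather than a real network (for a linear score w·x, the gradient with respect to the input is simply w):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)   # weights of a toy linear classifier over a 28x28 'image'
x = rng.normal(size=784)   # an input, classified by the sign of the score w.x

score = w @ x
label = np.sign(score)

# Gradient-sign (FGSM-style) perturbation: nudge every pixel a tiny, uniform
# amount against the gradient, stepping just past the decision boundary.
epsilon = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - epsilon * label * np.sign(w)

print(label, np.sign(w @ x_adv))  # the classification flips...
print(epsilon)                    # ...at a tiny, uniform per-pixel change
```

The statistics of w fully determine the flip; nothing a human viewer would register as a different image has changed, which is the disparity the paragraph above leans on.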
Because different ranges of the perceptual spectrum are levels of access rather than states of a constitutive panpsychism, there is no reason to be afraid of Dualism as a legitimate underlying theme for the middle range. With the understanding that the middle range is only the most robust type of perceptual access and not an assertion of naive realism, we are free to redeem some aspects of the Cartesian intuition. The duality seen by Descartes, Galileo, and Locke should not be dismissed as a naive misunderstanding from a pre-scientific era, but as the literal ‘common-sense’ scope of our anthropic frame of perception. This naive scope, while unfashionable after the 19th century, is no less real than the competing ranges of sense. Just because we are no longer impressed by the appearance of res cogitans and res extensa does not mean that they are not impressive. Thinking about a cogitans-like and extensa-like duality as diametrically filtered versions of a ‘res aesthetica’ continuum works for me. The fact that we can detect phenomena that defy this duality does not make the duality false; it only means that duality isn’t the whole story. Because mid-level perception has a sample rate that is slower than the bottom range, we have been seduced into privileging that bottom range as more real. This to me is not a scientific conclusion, but a sentimental fascination with transcending the limits of our direct experience. It is exciting to think that the universe we see is ‘really’ composed of exotic Planck scale phenomena, but it makes more sense in my view to see the different scales of perception as parallel modes of access. Because time itself is being created and lensed within every scale of perception, it would be more scientific to avoid assigning a preferred frame to the bottom scale. The Access Continuum model restores some features of Dualism to what seems to me to be their proper place: as a simple and sensible map of typical waking experience. A sober, sane, adult human being in the Western conditioned mindset experiences nature as a set of immaterial thoughts and feelings inside a world of bodies in motion. When we say that appearances of Dualism are illusion, we impose an unscientific prejudice against our own native epistemology. We are so anxious to leave the pre-scientific world behind that we would cheat at our own game. To chase the dream of perfect control and knowledge, we have relegated ourselves to a causally irrelevant epiphenomenon.
To sum up, so far in this view, I have proposed:
- a universe of intrinsically perceptual phenomena in which some frames of perception are more localized, that is, more spatially, temporally, and perceptually impermeable, than others.
- Those frames of perception which are more isolated are more aesthetically impoverished so that in the most impermeable modes, realism itself is cleaved into unreal conjugate pairs.
- This unreality of disunited probabilities is what we see in poor perceptual conditions and in quantum theory. I call these pairs semaphores, and the degree of perceptual magnitude they embody I call the semaphoric or approximate range of the spectrum.
- The distance between semaphores is proposed to be characterized by uncertainty and incompleteness. In a semaphoric frame of visible perception, possibilities of pixels and possible connections between them do not appear as images, but to a seer of images, they hint at the location of an image which can be accessed.
- This idea of sensitivity and presentation as doors of experience rather than sense data to be fused into a phenomenal illusion is the most important piece of the whole model. I think that it provides a much-needed bridge between relativity, quantum mechanics, and the entire canon of Western and Eastern philosophy.
- The distinction between reality and illusion, or sanity and insanity is itself only relevant and available within a particular (proximate) range of awareness. In the approximate and ultimate frames of perception, such distinctions may not be appropriate. Reality is not subjective or relative, but it is limited to the mid-range scope of the total continuum of access. All perceptions are ultimately ‘real’ in the top level, trans-local sense and ‘illusion’ in the approximate, pre-local sense.
- It is in the proximate, middle range of perception where the vertical continuum of access stretches out horizontally so that perception is lensed into a duality between mechanical-tangible-object realism and phenomenal-intangible-subject realism. It is through the lensing that the extreme vantage points perceive each other as unreal, naive, or insane. Whether we are born to personally identify with the realism of the tangible or intangible seems to also hang in the balance between pre-determined fate and voluntary participation. Choosing our existential anchoring is like confronting the ‘blue dress’ or ‘duck-rabbit’ ambiguous image. Once we attach to the sense of a particular orientation, the competing orientation becomes nonsense.
Part 3: The Ultimate Range of the Access Continuum
Once the reader feels that they have a good grasp of the above ideas of quantum and classical mechanics as approximate and proximate ranges of a universal perceptual continuum, this next section can be a guide to the other half of the conjecture. I say it can be a guide because I suspect that it is up to the reader to collaborate directly with the process. Unlike a mathematical proof, understanding of the upper half of the continuum is not confined to the intellect. For those who are anchored strongly in our inherited worldviews, the ideas presented here will be received as an attack on science or religion. In my view, I am not here to convince anyone or prove anything; I am here to share a ‘big picture’ understanding that may only be possible to glimpse for some people at some times. For those who cannot or will not access this understanding at this time, I apologize sincerely. As someone who grew up with the consensus scientific view as a given fact, I understand that this writing and its writer may appear either ridiculously ignorant or insane. I would try to explain that this appearance too is actually supportive of the perceptual lensing model that I’m laying out, but this would only add to feelings of distrust and anger. For those who have the patience and the interest, we can proceed to the final part of the access continuum conjecture.
I have so far described the bottom end of the access continuum as being characterized by disconnected fragments and probabilistic guessing, and the middle range as a dualistic juxtaposition of morphic forms and ‘phoric’ experiences. In the higher range of the continuum, perceptual apertures are opened to the presence of supersaturated aesthetics which transcend and transform the ordinary. Phenomena in this range seem to freely pass across the subject-object barrier. If c is the perceptual constant in which public space and private time are diametrically opposed, then the transpersonal constant which corresponds to the fusion of multiple places and times can be thought of as c². We can construct physical clocks out of objects, but these actually only give us samples of how objects change in public space. The sense of time must be inferred by our reasoning, so that a dimension of linear time is imagined as connecting those public changes. This may seem solipsistic – that I am suggesting that time isn’t objectively real. This would be true if we assumed, as Berkeley did, that perception necessarily implies a perceiver. Because the view I’m proposing assumes that perception is absolute, the association of time with privacy and space with publicity does not threaten realism. Think of it like depth perception. In one sense we see a fusion of two separate two-dimensional images. In another sense, we use a single binocular set of optical sensors to give us access to three-dimensional vision. Applied to time, we perceive an exteriorized world which is relatively static, and we perceive an interiorized world-less-ness in which all remembered experiences are collected. It is by attaching our personal sense of narrative causality to the snapshots of experience that we can access publicly that a sense of public time is accessed. In the high level range of the continuum, time can progress in circular or ambiguous ways against a backdrop of eternity rather than the recent past. In this super-proximate apprehension of nature, archetypal themes from the ancient past or alien future can coexist. Either of these can take on extraordinarily benevolent or terrifying qualities.
Like it or not, no description of the universe can possibly be considered complete if it denies the appearance of surrealities. Whether it is chemically induced or natural, the human experience has always included features which we call mystical, psychotic, paranormal, or religious. While we dream, we typically do not suspect that we are in a dreamed world until we awake into another experience which may or may not also be a dream. It is a difficult task to fairly consider these types of phenomena as they are politically charged in a way which is both powerful and invisible to us. Like the fish who spends its life swimming in a nameless plenum, it is only those who jump or are thrown out of it who can perceive the thing we call water. Sanity cannot be understood without having access to an extra-normal perspective where its surfaces are exposed. If a lack of information is the bridge between the approximate and the proximate ranges of the access continuum, then transcendental experience is the bridge between the proximate and the ultimate range of the continuum. The highest magnitudes of perception break the fourth wall, and in an involuted/Ouroboran way, provide access to the surfaces of our own access capacities.
Going back to the previous example of vision, the ultimate range of perception can be added to the list:
- √c – Feeling your way around in a dark room where a few features are visible.
- c – Seeing three-dimensional forms in a well lit, real world.
- c² – Intuiting that rays, reflections, and rainbows reveal unseen facts about light.
It is important to understand that the “²” symbolizes a meta- relation rather than a quantity (although the quantitative value may be useful as well). The idea is that seeing a rainbow is “visibility squared” because it is a visible presence which gives access to deeper levels of appreciating and understanding visibility. Seeing light as spectral, translucent images, bright reflections, shining or glowing radiance, is a category of sight that gives insight into sight. That self-transcending recursiveness is what is meant by c²: in the case of seeing, visible access to the nature of visibility. If we look carefully, every channel of perception includes its own self-transcendent clues. Where the camera betrays itself as a lens flare, the cable television broadcast shows its underpinnings by freezing and pixellating. Our altered states of consciousness similarly tell us personally about what it is like for consciousness to transcend personhood. This is how nature bootstraps itself, encoding keys to decode itself in every appearance.
Other sense modalities follow the same pattern as sight. The more extreme our experiences of hearing, the more we can understand about how sound and ears work. It is a curious evolutionary maladaptation that rather than having the sense organ protect itself from excessive sensation, it remains vulnerable to permanent damage. It would be strange to have a computer that runs a program simulating something so intensely that it permanently damages its own capacity to simulate. What would be the evolutionary advantage of a map which causes deafness and blindness? This question is another example of why it makes sense to understand perception as a direct method of access rather than a side effect of information processing. We are not a program, we are an i/o port. What we call consciousness is a collection of perceptions under an umbrella of perception that is all but imperceptible to us normally. Seeing our conscious experience from the access continuum perspective means defining ourselves on three different levels at once – as a c² partition of experience within an eternal and absolute experience, as a c level ghost in a biochemical machine, and as a √c level emergence from subconscious computation:
- √c – (Semaphoric-Approximate) – Probabilistic Pre-causality
- c – (Phoric|Morphic-Proximate) – Dualistic Free Will and Classical Causality
- c² – (Metaphoric-Ultimate) – Idealistic or Theistic Post-Causality
Notice that the approximate and ultimate ranges both share a sense of uncertainty; however, where low level awareness seeks information about the immediate environment to piece together, high level awareness allows itself to be informed by what is beyond its directly experienced environments. Between the pre-causal level of recombinatory randomness and the supernatural level of synchronistic post-causality is the dualistic level, where personal will struggles against impersonal and social forces. From this Phoric perspective, the metaphoric super-will seems superstitious and the semaphoric un-will seems recklessly apathetic. This is another example of how perceptual lensing defines nature. From a more objective and scientific perspective, all of these appearances are equally real in their own frame of reference and equally unreal from outside of that context.
Just as high volume of sound reveals the limits of the ear, and the brightness of light exposes the limits of the eye, the limits of the human psyche at any given phase of development are discovered through psychologically intense experiences. A level of stimulation that is safe for an adult may not be tolerable for a child or baby. Alternatively, it could be true that some experiences which we could access in the early stages of our life would be too disruptive to integrate into our worldview as adults. Perhaps as we mature collectively as a species, we are acquiring more tolerance and sensitivity to the increased level of access that is becoming available to us. We should understand the dangers as well as the benefits that come with an increasingly porous frame of perception, both from access to the “supernatural” metaphoric and the “unnatural” semaphoric ranges of the continuum. Increased tolerance means that fearful reactions to both can be softened, so that what was supernatural can become merely surreal and what was unnatural can be accepted as non-repulsively uncanny. Whether it is a super-mind without a physical body or a super-machine with a simulated mind, we can begin to see both as points along the universal perceptual continuum.
Craig Weinberg, Tucson 4/7/2018
Latest revision 4/18/2018
*Special Diffractivity: c², c, and √c, Multisense Diagram w/ Causality, MSR Schema 3.3, Three-Phase Model of Will

The Hard Problem of Signaling
Download:
PDF – The Hard Problem of Signaling TSC2018

If it eats like a duck and poops like a duck, does it know what direction to fly in the Winter? In 1739, Jacques de Vaucanson unveiled Canard Digérateur (the Digesting Duck), a life-size mechanical duck which appeared to eat kernels of grain, then metabolize and defecate them. Vaucanson described the duck’s innards as a small “chemical laboratory.” But it was a hoax: food was collected in one container, and pre-made breadcrumb ‘feces’ were dispensed from a second, separate container. On the surface, Vaucanson’s Digesting Duck appeared to be a compelling reconstruction of a real duck. The analogy to AGI here is not to suggest that the appearance of an intelligent machine is necessarily a mere trick, but that the issue of artifice may play a much more crucial role in defining the phenomenon of subjectivity than it will appear to in observing the biological objects associated with our consciousness in particular. Consciousness itself, as the ultimate source of authenticity, may have no substitute.
If a doll can be made to shed tears without feeling sad, there is no reason to rule out the possibility of constructing an unfeeling machine which can output enough human-like behaviors to pass an arbitrarily sophisticated Turing Test. A test itself is a method of objectifying and making tangible some question that we have. Can we really expect the most intangible and subjective aspects of consciousness to render themselves tangible using methods designed for objectivity? When we view the world through a lens — a microscope, language, the human body — the lens does not disappear, and what we see should tell us as much, if not more, about the lens and the seeing as it does about the world. If math and physics reveal to us a world in which we don’t really exist, and what does exist are skeletal simulating ephemera, it may be because it is the nature of math and physics to simulate and ephemeralize. The very act of reduction imposed intentionally by quantifying approaches may increasingly feed back on its own image the further we get from our native scope of direct perception. In creating intelligence simulation machines we are investing in the most distanced and generic surface appearances of nature that we can access and using them to replace our most intimate and proprietary depths. An impressive undertaking, to be sure, but we should be vigilant about letting our expectations and assumptions blind us. Not overlooking the looking glass means paying attention in our methods to which perceptual capacities we are extending and which we are ignoring. Creating machines that walk like a duck and quack like a duck may be enough to fool even other ducks, but that doesn’t mean that the most essential aspects of a duck are walking and quacking. It may be the case that subjective consciousness cannot be engineered from the outside-in, so that putting hardware and software together to create a person would be a bit like trying to recreate World War II with uniforms and actors. A person, like a historical event, may only arise in a single, unrepeatable historical context. Our human experience carries with it a history of generations of organisms and organic events, not just as biological recapitulations, but as a continuous enrichment of sensory affect and participation. Humanity’s path diverged from the inorganic path long, long ago, and it may take just as long for any inorganic substance to be usable to host the types of experience available to us, if ever. The human qualities of consciousness may not develop in any context other than that of directly experiencing the life of a human body in a human society.
Information does not physically exist
Alfred Korzybski famously said “the map is not the territory”. To the extent that this is true, it should be understood to reveal that “information is not physics”. If there is a mapping function, there is no reason to consider it part of physics, and in fact that convention comes from an assumption of physicalism rather than a discovery of physical maps. There is no valid hypothesis of a physical mechanism for one elemental phenomenon or event to begin to signify another as a “map”. Physical phenomena include ‘formations’ but there is nothing physical which could or should transform them ‘in’ to anything other than different formations.
A bit or elementary unit of information has been defined as ‘a difference that makes a difference’. While physical phenomena seem *to us* to make a difference, it would be anthropomorphizing to presume that they are different or make a difference to each other. Difference and making a difference seem to depend on some capacity for detection, discernment, comparison, and evaluation. These seem to be features of conscious sense and sense making rather than physical cause and effect. The more complete context of the quote about a difference which makes a difference has to do with neural pathways and an implicit readiness to be triggered.
In Bateson’s paper, he says “In fact, what we mean by information—the elementary unit of information—is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.” In my view this ‘readiness’ is a projection of non-physical properties of sense and sense making onto physical structures and functions. If there are implicit ‘questions’ on the neural level, I suggest that they cannot be ‘in them’ physically, and the ‘interiority’ of the nervous system or other information processors is figurative rather than literal.
My working hypothesis is that information is produced by sense-making, which in turn is dependent upon more elemental capacities for sense experience. Our human experience is a complex hybrid of sensations which seem to us to be embodied through biochemistry and sense-making experiences which seem to map intangible perceptions outside of those tangible biochemical mechanisms. The gap between the biochemical sensor territories and the intangible maps we call sensations is a miniaturized view of the same gap that exists at the body-mind level.
Tangibility itself may not be an ontological fact, but rather a property that emerges from the nesting of sense experience. There may be no physical territory or abstract maps, only sense-making experiences of sense experiences. There may be a common factor which links concrete territories and abstract maps, however. The common factor cannot be limited to the concrete/abstract dichotomy, but it must be able to generate those qualities which appear dichotomous in that way. To make this common factor universal rather than personal, qualia or sense experience could be considered an absolute ground of being. George Berkeley said “Esse est percipi (To be is to be perceived)”, implying that perception is the fundamental fabric of existence. Berkeley’s idealism conceived of God as the ultimate perceiver whose perceptions comprise all being, however it may be that the perceiver-perceived dichotomy is itself a qualitative distinction which relies on an absolute foundation of ‘sense’ that can be called ‘pansense’ or ‘universal qualia’.
In personal experience, the appearance of qualities is known by the philosophical term ‘qualia’ but can also be understood as received sensations, perceptions, feelings, thoughts, awareness, and consciousness. Consciousness can be understood as ‘the awareness of awareness’, while awareness can be ‘the perception of perception’. Typically we experience the perceiver-perceived dichotomy; however, practitioners of advanced meditation techniques and experiencers of mystical states of consciousness report a quality of perceiverlessness which defies our expectation of perceiver-hood as a defining or even necessary element of perception. This could be a clue that transpersonal awareness transcends distinction itself, providing a universality which is at once unifying, diversifying, and re-unifying. Under the idea of pansense, God could either exist or not exist, or both, but God’s existence would either have to be identical with or subordinate to pansense. God cannot be unconscious, and even God cannot create his own consciousness.
It could be thought that making the category of perception absolute makes it just as meaningless as calling it physical; however, the term ‘perception’ has a meaning even in an absolute sense, in that it positively asserts the presence of experience, whereas the term ‘physical’ is more generic and meaningless. Physical could be rehabilitated as a term which refers to tangible geometric structures encountered directly or indirectly during waking consciousness. Intangible forces and fields should be understood to be abstract maps of metaphysical influences on physical appearances. What we see as biology, chemistry, and physics may in fact be part of a map in which a psychological sense experience makes sense of other sense experiences by progressively truncating their associated microphenomenal content.
Information is associated with Entropy, but entropy ultimately isn’t purely physical either. The association between information and entropy is metaphorical rather than literal. The term ‘entropy’ is used in many different contexts with varying degrees of rigor. The connection between information entropy and thermodynamic entropy comes from statistical mechanics. Similar statistical mechanical formulas can be applied to both the probability of physical microstates (Boltzmann, Gibbs) and the probability of ‘messages’ (Shannon), however probability derives from our conscious desire to count and predict, not from that which is being counted and predicted.
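The parallel formulas in question, for a distribution p_i over microstates (Gibbs) or messages (Shannon):

S = -k_B \sum_i p_i \ln p_i \qquad\qquad H = -\sum_i p_i \log_2 p_i

The shared statistical shape is what licenses the analogy; the Boltzmann constant k_B and the base of the logarithm are the only formal differences.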
“‘Gain in entropy always means loss of information, and nothing more.’ To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.” – Wikipedia
Information can also be considered negentropy:
“Shannon considers the uncertainty in the message at its source, whereas Brillouin considers it at the destination” – physics.stackexchange.com
Thermodynamic entropy can be surprising in the sense that it becomes more difficult to predict the microstate of any individual particle, but unsurprising in the sense that the overall appearance of equilibrium is both a predictable, unsurprising conclusion and it is an appearance which implies the loss of potential to generate novelty or surprise. Also, surprise is not a physical condition.
Heat death is a cosmological end-game scenario which is maximally entropic in thermodynamic terms but lacks any potential for novelty or surprise. If information is surprise, then high information would correlate to high thermodynamic negentropy. The Big Bang is a cosmological creation scenario which follows from a state of minimal entropy in which novelty and surprise are also lacking until the Big Bang occurs. If information is surprise, then low information would correlate to high thermodynamic negentropy. These two correlations point in opposite directions, which is one more sign that identifying information with entropy or negentropy is metaphorical rather than literal.
The qualification of ‘physical’ has evolved and perhaps dissolved to a point where it threatens to lose all meaning. In the absence of a positive assertion of tangible ‘stuff’ which does not take tangibility itself for granted, the modern sense of physical has largely blurred the difference between the abstract and concrete, mathematical theory and phenomenal effects, and overlooks the significance of that blurring. Considering physical a category of perceptions gives meaning to both categories in that nature is conceived as being intrinsically experiential with physical experiences being those in which the participatory element is masked or alienated by a qualitative perceiver-subject/perceived-object sense of distinction. The physical is perceived by the subject which perceives itself to possess a participatory subjectivity that the object lacks.
Information depends on a capacity to create (write) and detect (read) contrasts between higher and lower entropy. In that sense it is meta-entropic, and either the high or low entropy state can be foregrounded as signal or backgrounded as noise. The absence of both signal and noise on one level can also be information, and thus a signal, on another level. What constitutes a signal in the most direct frame of reference is defined by the meta-signifying capacity of “sense” to deliver sense-experience. If there is no sense experience, there is nothing to signify or make-sense-of. If there is no sense-making experience, then there is nothing to do with the sense of contrasting qualities to make them informative.
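A minimal Python sketch of that point: the entropy contrast between two bit strings is computable, but nothing in the bits themselves designates one string as signal and the other as noise; that assignment is supplied by the reader. (The strings and values here are arbitrary illustrations.)

```python
import numpy as np

def shannon_entropy(bits):
    """Shannon entropy in bits per symbol, treating the string as i.i.d. binary."""
    p = float(np.mean(bits))
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rng = np.random.default_rng(1)
noise = rng.integers(0, 2, size=64)   # high-entropy background
signal = np.zeros(64, dtype=int)
signal[::8] = 1                       # low-entropy, regular, 'writable' pattern

print(shannon_entropy(noise))   # ~1.0 bit/symbol
print(shannon_entropy(signal))  # ~0.54 bit/symbol: a readable contrast
```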
The principle of causal closure in physics would, if true, prevent any sort of ‘input’ or receptivity. Physical activity reduces to chains of causality which are defined by spatiotemporal succession. A physical effect differs from a physical cause only in that the cause precedes the effect. Physical causality, therefore, is a succession of effects or outputs acting on each other, so that any sense of inputs or affect onto physics would be an anthropomorphic projection.
The lack of acknowledgement of input/affect as a fundamental requirement for natural phenomena is an oversight that may arise from a consensus of psychological bias toward stereotypically ‘masculine’ modes of analysis and away from ‘feminine’ modes of empathy. Ideas such as Imprinted Brain Theory, the Autistic-Psychotic spectrum, and Empathizing-Systemizing theory provide a starting point for inquiries into the role that the overrepresentation of masculine perspectives in math, physics, and engineering plays in the development of formal theory and in the informal political influence over the academic adoption of theories.
Criticisms? Support? Join the debate on Kialo.
Computation as Anti-Holos

Here is a technical sketch of how all of nature could arise from a foundation which is ‘aesthetic’ rather than physical or informational. I conceive of the key difference between the aesthetic and the physical or informational (both anesthetic) as being that an aesthetic phenomenon is intrinsically and irreducibly experiential. This is a semi-neologistic use of the term aesthetic, used only to designate the presence of phenomena which are not only detected but are identical to detection. A dream is uncontroversially aesthetic in this sense; however, because our waking experience is also predicated entirely upon sensory, perceptual, and cognitive conditioning, we can never personally encounter any phenomenon which is not aesthetic. Aesthetics here is being used in a universal way and should not be conflated with the common usage of the term in philosophy or art, since that usage is specific to human psychology and relates primarily to beauty. There is a connection between the special case of human aesthetic sense and the general sense of aesthetic used here, but that’s a topic for another time. For now, back to the notion of the ground of being as a universal phenomenon which is aesthetic rather than anesthetic-mechanical (physics or computation).
I have described this aesthetic foundation with various names including Primordial Identity Pansensitivity or Pansense. Some conceive of something like it called Nondual Fundamental Awareness. For this post I’ll call it Holos: The absolute totality of all sensation, perception, awareness, and consciousness in which even distinctions such as object and subject are transcended.
I propose that our universe is a product of a method by which Holos (which is sense beneath the emergent dichotomy of sensor-sensed) is perpetually modulating its own sensitivity into novel and contrasting aspects. In this way Holos can be understood as a universal spectrum of perceivability which nests or masks itself into sub-spectra, such as visibility, audibility, tangibility, emotion, cogitation, etc., as well as into quantifiable metrics of magnitude.
The masking effect of sense modulation is, in this hypothesis, the underlying phenomenon which has been measured as entropy. Thermodynamic entropy and information entropy alike are not possible without a sensory capacity for discernment between qualities of order/certainty/completeness (reflecting holos/wholeness) and the absence of those qualities (reflecting the masking of holos). Entropy can be understood as the masking of perceptual completeness within any given instance of perception (including measurement perceptions). Because entropy is the masking of the completeness of holos, it is a division which masks division. Think of how the borders of our visual field present an evanescent, invisible or contrast-less boundary to the visibility of visual boundaries and contrasts. Where holos unites sense, entropy divides sense, including the sense of division, resulting in the stereotypical features of entropy – equilibrium or near-equilibrium of insignificant fluctuations, uncertainty, morphological decay to generic forms and recursive functions, etc. Entropy can be understood as the universal sense of insensitivity. The idea of nothingness refers to an impossible state of permanent unperceivability; however, just as absolute darkness is not the same as invisibility, even descriptors of perceptual absence such as stillness, silence, and vacuum are contingent upon a comparison with their opposites. Nothingness is still a concept within consciousness rather than a thing which can actually exist on its own.
Taking this idea further, it is proposed that the division of sense via entropy-insensitivity has a kind of dual effect. Where holos is suppressed, a black-hole-like event horizon of hyper-perceivability is also present. There is a conservation principle by which entropic masking must also residuate a hypertrophied entity-hood of sense experience: a sign, or semaphore, a.k.a. a unit of information/negentropy.
In dialectic terms, Holos/sense is the universal, absolute thesis of unity, which contains its own antithesis of entropy-negentropy. The absolute essence of the negentropy-entropy dialectic would be expressed in aesthetic duals such as particle-void, foreground-background, signal-noise. The aesthetic-anesthetic dual amplifies the object-like qualities of the foregrounded sensation such that it is supersaturated with temporary super-completeness, making it a potential ‘signal’ or ‘sign’…a surface of condensed-but-collapsed semantics, or ‘phoria’ turned into ‘semaphoria’, a.k.a. syntactic elements. I call the progressive formalizing of unified holos toward graphic units ‘diffractivity’. The result of diffractivity is that the holos implements a graphic-morphic appearance protocol within itself, which we call space-time, and which is used to anchor and guide the interaction of the entropic exterior of experience. The interiors of complex experiences are guided by the opposite, transformal sense of hierarchy-by-significance. Significance is another common term which I am partially hijacking for use in a more specific way, as the saturation of aesthetic qualities and the persistence of any given experience within a multitude of other experiences.
To recap, the conjecture is that all of nature arises by, through, for, and within an aesthetic foundation named ‘holos’. Through a redistribution of its own sensitivity, holos masks its unity property into self-masking/unmasking ‘units’ which we call ‘experiences’ or ‘events’. The ability to recall multiple experiences, to imagine new experiences, and to understand the relations between them is what we call ‘time’.
Within more complex experiences, the entropic property which divides experience into temporal sections can reunite with other, parallel complex experiences in a ‘back to back’ topology. In this mode of tactile disconnection, the potential for re-connection of the disconnected ends of experiences is re-presented as what we call ‘objects in space’ or ‘distance between points’, a.k.a. geometry. By marrying the graphed, geometric formality of entropy with the algebraic, re-collecting formality of sequence, we arrive at algorithm, or computation. Computation is not a ‘real phenomenon’ but a projection of the sense of quantity (an aesthetic sense just like any other) onto a distanced ‘object’ of perception.
Physics in this view is understood to be the inverted reflection and echo of experience which has been ‘spaced and timed’. Computation is the inversion of physics – the anesthetic void as addressable container of anesthetic function-objects. Physics makes holos into a hologram, and computation inverts the natural relation into an artificial, ‘information theoretic’, hypergraphic anti-holos. In the anti-holos perspective, nature is uprooted and decomposed into dustless digital dust. Direct experience is seen as a ‘simulation’ or as ‘emergent’, non-essential properties which only ‘seem to exist’, while the invisible, intangible world of quantum-theoretical mechanisms and energy-rich vacuums is elevated to the status of noumena.
Computationalists Repent! The apocalypse is nigh! 🙂
AI is Still Inside Out

Turn your doodles into madness.
I think this is a good example of how AI is ‘inside out’. It does not produce top-down perception and sensations in its own frame of awareness, but rather it is a blind seeking of our top-down perception from a completely alien, unconscious perspective.
The result is not like an infant’s consciousness learning about the world from the inside out and becoming more intelligent, rather it is the opposite. The product is artificial noise woven together from the outside by brute force computation until we can almost mistake its chaotic, mindless, emotionless products for our own reflected awareness.
This particular program appears designed to make patterns that look like monsters to us, but that isn’t why I’m saying it’s an example of AI being inside out. The point is that this program exposes image processing as a blind process of arithmetic simulation rather than any kind of seeing. The result is a graphic simulacrum…a copy with no original which, if we’re not careful, can eventually tease us into accepting it as a genuine artifact of machine experience.
See also: https://multisenserealism.com/2015/11/18/ai-is-inside-out/
Time for an update (6/29/22) to further demonstrate the point:
Added 5/3/2023:
Stochastic filtering is not how sense actually works, but it can seem like how sense works if you’re using stochastic filtering to model sense.

How Not To Destroy the World With AI – Stuart Russell
Even Wilder-Ass Sh*t
I’m only about 20 minutes into the video, but I wanted to post some comments before I forget them.
Topic 1: Non-Foundational Sets
Take the idea of non-foundational sets and infinite probability distributions of infinite probability distributions, but invert it. Literally invert the language and then conceptualize the result. If I do this and apply it as a hypothesis, the foundation of consciousness and of nature in general would be absolutely foundational setless-ness. Consciousness no longer needs to be positively asserted as an agent which instantiates itself recursively; rather, it is the appearance of unconsciousness within consciousness which is a negative assertion, temporarily instantiated by manipulating relative degrees of sensitivity. Think here of how black or white are colors which stand in for colorlessness. Desaturating an image is analogous to how conscious experience is truncated into quantifiable, finite forms and functions.*
To continue then: Instead of infinite probability distributions, I propose the inverse: finite, but absolutely pervasive improbability. If we trace back any consideration of first cause, we run into something that we must admit is beyond causality and exists ‘just because’. Instead of banishing this miraculous-seeming appearance of existence to ‘nothing’, ‘vacuum potential’, or ‘God’, I see that all phenomena have some degree of uniqueness, and that uniqueness is by definition idiopathic (a.k.a. ‘strongly emergent’). Blue comes out of nothing but itself, and ultimately the uniqueness of any given moment does the same thing. By grounding our view of nature in this pervasive improbability, we can re-frame our own interest in probability as a function of our subjective desire to defy incompleteness rather than an impartial assessment of nature as a phenomenon.
Topic 2: Hypercomputation
I like this line of thinking. I would suggest thinking first of ‘transcomputation’ rather than hypercomputation; that is, instead of conditions which are inaccessible to computation simply because they exceed quantitative limits of finite-ness, think of finite-ness itself as only the monochrome edges which bound a deeper and dynamically expanding spectrum of universal phenomenology. Because this spectrum is pervasive, communication is a matter of triggering each entity’s unmasking of its own separation from the totality rather than generating a new understanding which is copied from one entity to another.
To communicate is to subtract a separation between two minds, the separation between minds and the totality of cognitive truth, and the separation between that totality of truth and the universal phenomenal spectrum. To communicate is to dissolve some of the masking of underlying unity across all phenomena.
Topic 3: Morphic Resonance
Morphe = form, shape. I like Rupert Sheldrake’s famous idea of MR, but again I would invert it. The name ‘Morphic Resonance’ draws us to the exteriors of tangible and visible objects. It implies that forms hold meaning which propagates to other forms. Turning that inside out, I find the resonance to be ‘phoric’ rather than morphic. This neologistic use of the Greek root ‘phor’ (pherein, to carry, bear) is intended to inspire associations with terms like ‘metaphor’, ‘euphoria/dysphoria’, and even ‘semaphore’. Between these three terms, we might glimpse a sign of three ‘primary colors’ of human consciousness: the personal, the sub-personal or impersonal, and the transpersonal. I am a holon of experiences (‘phoria’), my body is a holon of biochemical code (‘semaphoria’), and my lifetime is a leaf on a branching ‘zeitgeist tree’ of mytho-poetic themes propagating from the top down (‘metaphoria’, or anthro-metaphoria for us humans).
In all cases where we talk about ‘patterns’ (morphe) we should substitute ‘sense’ (phor), i.e. instead of an ontological, existential phenomenon, we should think of a phenomenon which expresses itself to itself by self-masking, unmasking, and residuating novelty (in a similar way to the residuation of color from the diffracting of visible light which is ‘white’, i.e. colorless, clear, and representative of visibility itself).
*sidebar: If we have color, we can use it to point to the potential for colorlessness. This pointing can’t be accomplished mechanically because there can’t literally be a color which implies colorlessness, but because of the aesthetic quality of black and white in relation to the other colors, we can pick up on a metaphor. By being able to access the difference between monochrome and color vision, and compare them, we ‘break the fourth wall’ which separates the content of visible phenomena from the modality of our visual sense, and we can carry that metaphorical wall breaking to the larger context of the wall between our personal experience and the totality of all possible experience. We can see that visibility is possible with only one dimension of hue, but only if we have more hues that we can compare it with. If there were no color vision, there would be no way to conceive of more than one type of hue (luminosity).
Fooling Computer Image Recognition is Easier Than it Should Be
This 2016 study, Universal Adversarial Perturbations, demonstrates how the introduction of specially designed low-level noise into image data makes state-of-the-art neural networks misclassify natural images with high probability. Because the noise is almost imperceptible to the human eye, I think it should be a clue that image processing technology is not ‘seeing’ images.

It is not only the fact that it is possible to throw off the technology so easily that is significant, but that the kinds of miscalculations that are made are so broad and unnatural. Had the program any real sense of an image, adding some digital grit to a picture of a coffee pot or a plant would not cause a ‘macaw’ hit, but rather some other visually similar object or plant.
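To make the mechanism concrete, here is a minimal sketch in Python/PyTorch of how such a perturbation can be computed. Hedges: this is the simpler per-image FGSM technique, not the universal perturbation algorithm of the cited paper; the epsilon value is an arbitrary illustration; and the input is assumed to be an already-normalized image tensor:

    import torch
    import torchvision.models as models

    # A pretrained classifier stands in for the 'state of the art' networks
    # discussed in the paper.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_perturb(image: torch.Tensor, epsilon: float = 0.007) -> torch.Tensor:
        """Return a copy of 'image' nudged toward misclassification."""
        image = image.clone().requires_grad_(True)
        logits = model(image)
        label = logits.argmax(dim=1)  # the model's own current prediction
        loss = torch.nn.functional.cross_entropy(logits, label)
        loss.backward()
        # Step in the direction that most increases the loss. The change is
        # nearly imperceptible to a human eye, yet can flip the output class.
        return (image + epsilon * image.grad.sign()).detach()

The detail relevant to this post is that nothing in the arithmetic above involves seeing anything; the ‘image’ is only a gradient landscape to be nudged.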
While many will choose to see this paper as a suggestion for a need to improve recognition methods, I see it as supporting a shift away from outside-in, bottom-up models of perception altogether. As I have suggested in other posts, all of our current AI models are inside out.
3/16/17 – see also http://www.popsci.com/byzantine-science-deceiving-artificial-intelligence
Dereference Theory of Consciousness
Draft 1.1
IMO machine consciousness will ultimately prove to be an oxymoron, but if we did want to look for consciousness analogs in machine behavior, here is an idea that occurred to me recently:
Look for nested dereferencing, i.e. places where persistent information processing structures load real-time sense data about the loading of all-time sense data.

At the intersection of philosophy of mind, computer science, and quantum mechanics is the problem of instantiating awareness. What follows is an attempt to provide a deeper understanding of the significance of dereferencing and how it applies to integration of information and the quantum measurement problem. This is a broad conjecture about the nature of sensation as it pertains to the function of larger information processes, with an eye toward defining and identifying specific neuroscientific or cognitive signatures to correlate with conscious activity.
A dereference event can be thought of as the precise point at which a system that is designed or evolved to expect a range of inputs receives the real input itself. This sentence, for example, invites an expectation of a terminal clause, the terms of which are expected to be English words which are semantically linked to the rest of the sentence. English grammar provides templates of possible communication, but the actual communication relies on specific content to fill in those forms. The ability to parse English communication can be simulated by unconscious, rule-based mechanisms; however, I suggest that the ability to understand that communication involves a rule-breaking replacement of an existing parse theory with empirical, semantic fact. The structure of a language, its dictionary, etc., is a reference body for timeless logic structures. Its purpose is to enable a channel for sending, receiving, and modifying messages which pertain to dereferenced events in real time. It is through contact with real-time sense events that communication channels can develop in the first place, and continue to self-modify.
What is proposed here is an alternative to the Many-Worlds Interpretation of quantum wave function collapse – an inversion of the fundamental assumption, in which all possible phenomena diverge or diffract within a single context of concretely sensed events. The wave function collapse in this view is not the result of a measurement of what is already objectively ‘out there’, nor is it the creation of objective reality by subjective experiences ‘in here’, but a quantized return to an increasingly ‘re-contextualized’ state. A perpetual and unpredictable re-acquaintance with unpredictable re-acquaintance.
In programmatic terms, the variable *p is dereferenced to the concrete value (*p = “the current temperature”, dereferenced p = “is now 78 degrees Fahrenheit”). To get to a better model of conscious experience (and I think that this plugs into Orch OR, IIT, Interface Theory, and Global Workspace), we should look at the nested or double dereferencing operation. The dereferencing of dereferencing (**p) is functionally identical to awareness of awareness or perception of perception. In the *p = “the current temperature” example, **p is “the current check of the current check of the temperature”. This not only points us to the familiar strange loop models of consciousness, but extends the loop outward to the environment and the environment into the loop. Checking the environment of the environmental check is a gateway to veridical perception. The loop modifies its own capacity for self-modification.
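As a toy illustration of the single and double dereference in Python (every name here is invented for the example, and, per the disclaimer below, this is metaphor rather than a mechanism of consciousness):

    import time

    def read_temperature() -> float:
        return 78.0  # hypothetical stand-in for a live sensor reading

    # First-order dereference (*p): the reference 'the current temperature'
    # is resolved to a concrete value at the moment of checking.
    def check() -> str:
        return f"is now {read_temperature()} degrees Fahrenheit"

    # Second-order dereference (**p): the act of checking is itself checked,
    # i.e. the system samples its own sampling in real time.
    def check_the_check() -> dict:
        return {"reading": check(), "checked_at": time.time()}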
Disclaimer: This description of consciousness as meta-dereferencing is intended as a metaphor only. In my view, information processing cannot generate conscious experience, however, conscious experience can possibly be better traced by studying dereferencing functions. This view differs from Gödel sentences or strange loops in that those structures refer to reference (This sentence is true) while the dereference loop specifically points away from pointers, rules, programs, formal systems, etc and toward i/o conditioning of i/o conditions. This would be a way for information-theoretic principles to escape the nonlocality of superposition and access an inflection point for authentic realization (in public space-time as shared experience). “Bing” = **Φ. In other words, by dereferencing dereference, the potentially concrete is made truly concrete. Sense experience is embodied as stereo-morphic tangible realism and tangible realism is disembodied as a sense of fact-gathering about sensed fact-gatherings.
Dereference theory is an appeal to anti-simulation ontology. Because this description of cognition implicates nested input/output operations across physically or qualitatively *real* events, the subjective result is a reduced set of real sense conditions rather than confabulated, solipsistic phenomenology. The subjective sense condition does not refer only to private or generic labels within a feed-forward thinking mechanism, but also to a model-free foundation and a genuine sensitivity of the local ‘hardware’ to external conditions in the public here-and-now. This sensitivity can be conceived initially as a universal property of some or all physical substrates (material panpsychism); however, I think that it is vital to progress beyond this assumption toward a nondual fundamental awareness view. In other words, subjective consciousness is derived from a dereference of a local inertial frame of expectation to a universal inertial frame of unprecedented novelties which decay as repetition. Each event is instantiated as an eternally unique experience, but propagated as repetitive/normalized translations in every other frame of experience. That propagation is the foundation for causality and entropy.
Information Theory 1.1
1/25/2016 Information Theory Update
Here are some notes which I hope will provide a more concise understanding about the nature of computation, logic, and mathematics.
Information theories such as those offered by Shannon and Turing give us cause to see an underlying universality of information which is rooted in simple arithmetic truths such as addition, multiplication, and integers. These arithmetic truths are theories which can be applied successfully to computing machines without regard to their physical substrate*. While this offers a method to deploy universal principles to the control of a specific mechanism, the control which is offered is different in kind from the literal (motor) control of the hardware. Motor control of computer hardware can be accomplished electromagnetically or classically (as with analog clocks with gears powered by spring tension or a gravity pendulum), and now quantum-mechanically to some extent, but not directly by math. Mathematics cannot turn a computer on or keep it running; it can only provide a non-local set of rules which can be localized through motor control.
This is critically important to understand when considering the possibility of Artificial Intelligence: Computation can only be absolutely general or absolutely specific. When we implement a logic circuit, we are not literally imposing philosophical logic on a circuit; rather, we are only interpreting the physical changes of a device metaphorically. In short, a logic circuit cannot literally represent a state of 1/0 or True/False, it can only literally present a concrete state of being switched to Stop (Off) or Go (On). This is the territory of computation – what is known as Layer 1 in the seven-layer OSI network model**. All higher layers are not physical territories but logical maps – human abstractions projected by software engineers and application users.
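A trivial sketch of the Layer 1 point (the threshold and names here are hypothetical, not any particular logic family’s specification):

    # The device only ever has a voltage; the 'bit' lives in our convention.
    V_THRESHOLD = 1.4  # hypothetical switching threshold, in volts

    def interpret_as_bit(voltage: float) -> int:
        # The mapping from a continuous physical magnitude to '1'/'0' is
        # imposed by the interpreting convention, not present in the device.
        return 1 if voltage >= V_THRESHOLD else 0

    interpret_as_bit(3.3)  # -> 1, but only under our reading convention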

For this reason, no computing machine can represent the middle ranges between the absolute generality of mathematical theory and the absolute specificity of a machine’s physical condition. It’s all either above the line of personal awareness (oceanic metaphor) or below the line (granular semaphores). We can get a lot of utility out of these devices, however we can’t get any empathy from them. They can’t care about anything or anyone, since ‘they’ are purely in our imagination.
The philosophically relevant part of what I’m proposing applies to the prospects for generating natural intelligence artificially. AGI that feels as well as thinks is not necessarily desirable, but if my view is on the right track, computers becoming sentient is not something that we need to worry about. It won’t happen. Why? Because mathematics is not accessing the Physical layer from the top down but from beneath the bottom layer. This means that even though we can use a computing device to validate truth conditions, we can only validate those truths which refer literally to the concrete states of the machine, and those truths which refer figuratively to the universal arithmetic relations. Nothing that a computer does needs to be *about* anything beyond the machine’s physical state, and so any appearance of emotion, intention, sensitivity, etc. is purely hypothetical and would violate parsimony. The Church-Turing Thesis lays out the framework for universal computing, but in saying that all functions of calculation can be reduced to a-signifying digital steps, we are also saying that all semantic meanings shall be reduced to blind syntax. It cuts both ways.
Isn’t the brain just a biological computer?
No. This is an obsolete idea, for a lot of reasons which I won’t get into here, but suffice it to say, the brain is an organ within a living body which developed organically from a single self-replicating, self-modifying cell. Machines, by contrast, are assembled artificially from naturally unaffiliated substances and parts. That’s not a reason to discount the possibility of sentience through silicon, but it is a reason to go beyond knee-jerk presumptions that continue to dominate thinking about AI. While Turing’s genius is only now beginning to receive the appreciation it deserves, the shortcomings of his Imitation Game approach have not yet been widely understood.
Alan Turing can be pardoned for his reliance on mid-century Behaviorism as a psychological model, since it was very popular at the time and also because, along with others, I suspect that his natural instincts were quite systemizing/autistic. This carries over in modern populations, with autistic-masculine influences far overwhelming the psychotic-feminine influences in computer science and engineering fields. As a result, we have a lot of strong, controlling voices which insist upon reducing psychology to mechanistic terms, and all dimensions of consciousness to processing of logical information. This is so pervasive that any casual conversation online which challenges the supremacy of first-order logic will tend to erupt into a firestorm that ends with something like “Yeah I’m done here. You’re just spouting nonsense”.
To this end, I find this pyramid model for debate at least as important as the other models of information networking:

My call for civility in discussion is not mere political correctness or over-sensitivity, but rather a purely pragmatic consideration. Unlike a computer, the human mind loses its capacity for curiosity and fairness when it falls into aggression. People talk over each other and assert their opinions ever more rigidly and repetitively rather than thinking creatively. This mirrors the action of computation itself – recursive enumeration masquerading as communication.
A great many people think they are thinking when they are merely rearranging their prejudices. – William James
*Not entirely true. The physical substrate of a machine requires precision and solidity. We cannot build a computer out of clouds or fog; it needs to be made of something physical which stays put and which has at least one absolutely persistent read/write capacity. Traditional logic circuits must be implemented physically through a rigid skeleton of readable coordinates.
**It has been popular in recent years to proclaim that the OSI Model is dead. The feeling is that TCP/IP is the predominant protocol suite being used in the real world, and it doesn’t match up with OSI, so we should dump OSI in favor of something like this:

I do see the appeal of this; however, I agree with this author that “OSI teaches more of the reasoning behind making multiple layers and what they do. Collapsing the traditional model for the sake of making it look like TCP/IP is going to cause more harm than good.” – Tom Hillingsworth
Notes on Philosophical Incorrigibility
No, this is not about philosophers behaving like unruly children (although at times, they can). Incorrigibility is a term that refers to “a property of a philosophical proposition, which implies that it is necessarily true simply by virtue of being believed. A common example of such a proposition is René Descartes’ ‘cogito ergo sum’ (‘I think, therefore I am’).”
Symbolic reference cannot ‘break the fourth wall’ – meaning that whatever words or gestures we use to communicate about something can only refer to things figuratively. This sentence, for example, can’t address you, the reader, in a literal sense. I can write “Hey you! Yes, you! Stand still laddie!” but it is not really possible for these words to address anyone literally. The same words could come out of a random letter generator, or they could have been written by someone who died before the reader was born. The entire premise that language is meaningful depends on an audience that is able to derive meaning by interpreting messages in that language.
Doxastic logic is a type of modal logic which uses the terms ‘belief’ and ‘proposition’ to formalize, and really to digitize, the possible relations of belief and truth, including beliefs about beliefs, possible beliefs about possible truths, etc.
Accurate reasoner: An accurate reasoner never believes any false proposition.
Bp→p
Normal reasoner: A normal reasoner is one who, while believing p, also believes he or she believes p.
Bp→BBp
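For readers who want these two definitions in a machine-checkable form, here is a minimal sketch in Lean 4, where B is an uninterpreted belief operator; note that nothing in it touches the qualitative state discussed next:

    def Accurate (B : Prop → Prop) : Prop := ∀ p : Prop, B p → p
    def Normal   (B : Prop → Prop) : Prop := ∀ p : Prop, B p → B (B p)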
My beef with modal logic is that while it gives us an informative language to talk about mental states, it cannot access the quality of the state itself, and is therefore misguided when applied to the deeper conditions of consciousness itself. There is no modal symbol for ‘wakes up’ or ‘loses consciousness’, because those conditions affect the entire phenomenon from which reason can arise, rather than being a function of reasoning.
When viewed from an ontological perspective, I think that we would have to consider a proposition to be a kind of belief, even if it is a belief that is assumed to be shared by everyone or every thing. The proposition that “Fire is relatively hot” is itself only a message which is communicated through language. Before we can agree that “p = fire is relatively hot”, we must first agree that
p is literally a sensation: p is seen as a group of adjacent graphic squiggles, or heard as a phonic utterance. p is actually s(’p’), since acoustic vibrations or optical contrasts can’t literally be propositions about fire.
s(’p’) is subconsciously identified as a message (rather than, say, decorative art) within our cognitive sense. We think that our sense of ‘p’ means something that we can understand. p is promoted to i(s(’p’)).
i(s(’p’)) is consciously understood as a particular message with a particular meaning. This promotion of i(s(’p’)) to the executive level of sense, where we personally evaluate and act on the contents of messages would be the third nesting of sense u(i(s(’p’))). It is not only cognitive, but articulated on the personal level of cognition.
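A toy sketch of this nesting in Python (the names s, i, and u are the post’s own labels; everything else is invented for illustration):

    def s(raw: str) -> dict:
        # sensation: graphic squiggles or phonic utterances register as a percept
        return {"percept": raw}

    def i(sensed: dict) -> dict:
        # identification: the percept is subconsciously recognized as a message
        return {"message": sensed}

    def u(identified: dict) -> dict:
        # understanding: the message is consciously evaluated at the personal level
        return {"understood": identified}

    u(i(s("fire is relatively hot")))  # u(i(s('p')))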
It should be noted that deconstructing the foundation of classical logic this way is intentional. Logic begins with a philosophical assumption of semantic realism. It is really ironic, especially in doxastic logic, where we are concerned with the consistency of reasoning, to begin with the assumption of a reasonable universe as a given. p is simply p. A proposition is given, and in that presentation of the proposition, truth is considered (wait for it…) incorrigible. If we want to say that p is false, we would just say ‘not p’. Truth and proposition are equivalent because we are assuming an unquestionable solidity to this fundamental logical unit – a unit which represents facts as they simply are, unconditioned, present without any dependence on ontology. Logic of this sort can be used to diagram systems which remind us of physics or of thinking and communicating, but they begin with the fact of coherence and sense already in place.
By inverting this previously unexamined axiom, I hope to reveal the myth of the logical ‘given’ and replace it with the more skeptical, honest view that logic is derived from sense. Just as a child depends on developing sensorimotor skills prior to developing abstract reasoning skills, all logic derives from deeper levels of sensory experience. Even computer logic relies on the sense of a physical mechanism…the capacity for some substance to detect and project some tangible role in a tangible chain reaction. Abstract logic is always an intangible map that is projected psychologically onto such a tangibly experienced territory. It is this tangibility, this concretely participatory aesthetic spectacle which is doing the work and which can appreciate the benefits of having accomplished it.
In my view, artificial intelligence has a problem, not because there is something special or magical about living creatures, or Homo sapiens, but because it seeks to impose an abstraction onto reality ‘feet first’, as it were. A computer program is a set of propositions which is further proposed to be imitated by a physical machine. Instead of a sensation which is identified and understood, u(i(s(’p’))), there is a ‘p’(’p’(u(’p’(i(’p’(s))))))… a mere proposition of a proposition of an understanding of a proposition of an identification of a proposition of a sensation. Those who have a grasp of why this is different from the natural u(i(s(’p’))) don’t really need an explanation. ‘The map is not the territory’, or ‘the menu is not the meal’, should be enough. Those who do not see the difference, or do not identify why the difference is so significant, or do not understand the specific meaning of the significance, are probably approaching the entire question of consciousness from the classical logical orientation. For those people, if there is any possibility at all of their shifting to the new perspective that I am proposing, I think that they would have to begin from the incorrigibility of concrete sense rather than of abstract logic.
Emergent properties can only exist within conscious experience.
…
Neither matter nor information can ‘seem to be’ anything. They are what they are.
It makes more sense that existence itself is an irreducibly sensory-motive phenomenon – an aesthetic presentation with scale-dependent anesthetic appearances rather than a mass-energetic structure or information processing function. Instead of consciousness (c) arising as an unexplained addition to an unconscious, non-experienced universe (u) of matter and information (mi), material and informative appearances arise from the spatiotemporal nesting (dt) of the conscious experiences that make up the universe.
Materialism: c = u(mdt) + c
Computationalism: c = u(idt) + c
Multisense Realism: u(midt) = c(c)/~!c.