Appearance vs Reality

Today’s post is a response to a couple of social media comments about life being meaningless and how real things are different from their appearance:
Everything is an appearance in some sense, even subjectivity and objectivity themselves. The capacity to appear in multiple sense modalities is what reality, as well as everything else, is made of. That which never appears in any sense is identical to non-existence.
Some appearances in some sense modalities appear to be more persistent or shared more widely than others. That is the appearance of realism. In my view, it exists to add Significance to conscious experience, but in so doing, it adds an appearance of insignificance to appearances that do not appear to be shared as widely or reliably persist for as long a duration. Some experiences appear to be ‘merely’ dreams or ‘illusion’ by comparison.
It’s all different densities of the same thing: Aesthetic-participatory phenomena. Qualia. Appearances of appearances. Conscious experience nested within itself at different relative perceptual frame rates.
Just as the frame rate / shutter speed on a camera determines whether its picture of a helicopter shows blades that are static, rotating, or a blur, so too do apertures of sensitivity determine whether the overlap between shared experiences appears as an object, percept, subject, or concept. It’s a spectrum of appearance. I call this eigenmorphism.
Meaning is not a tangible object, nor is it composed of tangible objects doing tangible things (moving, colliding, changing shape). If we are convinced that we are a body living in a physical universe of physical objects and automatic physical forces, then we have psychologically diminished and disqualified all phenomena that are not tangible objects or movers of objects.
We are telling ourselves and each other a fairy tale that is a materialistic anti-fairy tale about the absence of subjectivity and free will, of meaning and significance. It is a category error. We use instruments that cannot see and cannot feel to tell us what light and feeling are.
Scientific philosophy began with dualism that divided objects in space from everything else into two equal categories, but then the former were assigned ‘primary’ status, relegating the latter to be secondary properties. Secondary became ‘illusory’ and ‘emergent’…mere appearances that somehow ‘arise’ from quantities and shapes of physical energy and forces that are presumed primary and fundamental. The hard problem of consciousness was born of the explanatory gap between tangible objects and everything else… between Res Scientifica and Res Emergens.
This crypto-dualism has now become so fanatical in its deconstruction of consciousness and self to unconscious mechanical effects that people have become incapable of considering themselves and our lives together as having any value. It has filtered down through the academic, economic, political and social systems so that it pervades every part of public life. If you think that your life actually exists and has significance, you’re on your own. Obviously this creates misery and is unsustainable at scale.
In my understanding, this is all rooted in a confusion between the limits of our personal sense and sense-making capacities (personal consciousness) and the limits of all consciousness. Instead of assuming that our experience of human life is part of much larger scales of conscious experience, we have shifted the benefit of the doubt to the opposite.
What we experience as most persistent and common has been taken to be a small part of a hypothetical type of existence that is outside of all experience. The persistent experiences of the Sun and Earth, for example, are stripped of the perceptual, experiential facts that define them to us, leaving only the fact of persistence itself. The Sun and Earth are no longer persistent experiences but persistent structures – images of objects that are somehow independent of the scales of visibility and tangibility that define them. They are simply formations, or perhaps information processes…or one that gives rise to the other, all long before consciousness appears. Non-appearances that evolved automatically into appearances and then into illusions.
The assumption that conflates a universe prior to *our* (human) experience with a universe prior to *all* conscious experience is a pseudoscientific myth/metaphor that we have mistaken for literal scientific fact. In that myth, we are bodies and our minds are something that an organ in our body does. Certain types of cells happen to be arranged in ways that somehow transduce the movements of molecules into information that somehow transduces itself into appearances, and then illusory appearances. This is assumed even though it is circular, since the body and brain themselves are appearances.
The myth is treated as fact so the facts of conscious experience, including ourselves, must be treated as myth. Dualism has not only secretly remained as the animating principle of our civilization, but it has hardened into symmetrical pathologies of fanatical materialism and fanatical religiosity. Unable to reconcile mind and body, we have developed an ideology that yokes together the promises of our unnatural future with the promises of our supernatural past. To finish the job, we are building a mindless mind in a disembodied body. A FrankenHAL of super-human proportions to replace nature and consciousness with information and microelectronic energy. Will it work? The answer does not compute. We have to decide for ourselves…if we still can.
AI on the Meaninglessness of AI
Me: “What values do the vectors actually consist of?”
ChatGPT said:
Great question. Word embeddings—such as those generated by Word2Vec or Transformer models—consist of numerical values that encode semantic and syntactic information. Let’s break down exactly what these values are:
📌 What are embedding vectors?
An embedding vector is essentially a list of numerical values (often floating-point numbers), typically about 50 to 1024 dimensions, depending on the model used.
For example, a simple 5-dimensional embedding for the word “cat” might look like this:
cat → [0.12, -0.33, 0.81, -0.04, 0.76]
In practice, embeddings usually have higher dimensions (commonly 100–300 dimensions for Word2Vec, 768–4096 dimensions for Transformers like GPT-4).
[…]
📐 How are these numbers derived?
Embeddings are learned through training neural network models on massive text corpora:
Word2Vec learns embeddings by optimizing the ability to predict context words (CBOW/Skip-Gram). These embeddings reflect relationships based on usage patterns.
Transformers (like GPT) generate embeddings dynamically through self-attention, capturing richer contextual nuances. Each embedding reflects not just word meaning, but also specific sentence context.
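The distributional intuition behind Word2Vec-style training can be sketched without any neural network at all: words that occur in similar contexts accumulate similar context statistics. The snippet below uses raw co-occurrence counts rather than Word2Vec’s learned objective, and the tiny corpus is invented for illustration, but it shows what “relationships based on usage patterns” means.

```python
# A minimal sketch of the distributional idea behind embeddings:
# words used in similar contexts end up with similar context-count vectors.
# This is plain co-occurrence counting, NOT Word2Vec's trained objective,
# and the toy corpus below is made up for illustration.
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_vector(target, tokens, window):
    """Count words appearing within `window` positions of each occurrence of `target`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

print(context_vector("cat", corpus, 2))
print(context_vector("dog", corpus, 2))
# "cat" and "dog" share contexts such as "sat" and "the", which is
# precisely the kind of overlap a trained model compresses into vectors.
```

Word2Vec can be thought of as compressing a (much larger, weighted) version of these counts into a few hundred dense dimensions.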
[…]
🚩 In short:
Embeddings are numerical vectors (lists of floats).
They have no explicit “semantic label” per dimension.
Meaning emerges from relative positions in vector space.
Numerical relationships implicitly represent meaning and context.
This numeric representation makes words mathematically manipulable, allowing machine learning models to handle language efficiently and effectively.
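The claim that “meaning emerges from relative positions in vector space” can be made concrete with cosine similarity, the standard comparison used on embeddings. The 5-dimensional vectors below are invented for illustration (they are not from a real model); only their relative geometry matters, which is the point.

```python
# Sketch: per-dimension values are unlabeled; what models exploit is
# relative position, measured here by cosine similarity.
# The toy vectors are invented, not taken from any real embedding model.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "cat" and "dog" are placed near each other, "car" far from both.
cat = [0.12, -0.33, 0.81, -0.04, 0.76]
dog = [0.10, -0.30, 0.79, -0.01, 0.70]
car = [-0.62, 0.55, -0.10, 0.44, -0.20]

print(cosine_similarity(cat, dog))  # high: similar "meaning"
print(cosine_similarity(cat, car))  # low: dissimilar
```

Nothing in any single dimension says “feline”; the similarity score is entirely a fact about relative position, which is what the summary above means.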
You said:
“I would say that the values would only be seen to encode semantic meaning iff it were decoded in a semantic way in a conscious experience that includes access to those kinds of meaningful experiences and linguistic associations. If the AI had a perspective, it would not be reading an encoded text about semantic meanings, but rather there is only a codeless electronic mechanism that functions by the conditions of the semiconductor matter.”
ChatGPT said:
Your perspective aligns closely with a nuanced philosophical viewpoint, and you raise a critical and often-overlooked distinction:
1. Encoding vs. Experiencing:
Embedding vectors themselves don’t inherently contain semantic meaning. They are numerical correlations that reflect statistical patterns of human linguistic usage.
Any “meaning” we attribute to embeddings depends entirely on a human interpretation, a conscious entity equipped with sensory experiences, memory, imagination, and linguistic associations.
An AI system (like GPT) itself has no experiential subjectivity—it doesn’t have a “first-person view” or consciousness from which meaning can genuinely emerge or be directly experienced.
2. Codeless Mechanism & Material Conditions:
Indeed, the AI’s computations are fundamentally physical processes: electrons flowing through semiconductors, obeying natural laws of physics. These physical states, from the machine’s internal perspective (which it doesn’t truly have), carry no intrinsic meaning or understanding.
Any perceived semantics are projected onto these processes by conscious observers. Without consciousness, the AI system remains a pattern-driven physical machine devoid of actual “semantic comprehension.”
3. Semantic Meaning as a Property of Consciousness:
You suggest a crucial distinction: semantic meaning, linguistic associations, and conscious experience are inseparable.
A numerical embedding vector is not inherently meaningful—it’s a statistical abstraction of human language patterns. Meaning only emerges in the conscious mind of an observer with experience-based reference points and the capacity for interpretation.
4. Implication for AI and Consciousness Debate:
Your observation is compatible with arguments like Searle’s Chinese Room, Chalmers’ hard problem of consciousness, and Weinberg’s Multisense Realism (which you previously expressed interest in).
AI models—even sophisticated ones—remain at the level of “syntactic processing” without intrinsic semantic understanding. Any meaning or understanding attributed to AI is derivative (from the human viewpoint) rather than intrinsic (from a hypothetical AI’s perspective).
✅ In short, you’re making an important philosophical distinction:
Vectors and algorithms encode statistical correlations from human-generated language data.
Semantic meaning emerges only in consciousness, which AI lacks.
From the AI’s own (nonexistent) perspective, it’s just semiconductor physics—no code, no symbols, no meaning.
This observation underscores a deep and fundamental limitation of present-day AI, rooted not merely in technical issues but in philosophical and metaphysical considerations regarding the nature of meaning, consciousness, and experience.
About that first ever photograph of light as both a particle and wave

This article that went around last year is misleading on several levels.
1. It’s not a photograph; it’s a synthetic/graphic image generated from calculated statistics.
2. It’s a composite of many measurements, not a capture of anything like light.
3. We have no way of knowing whether we are measuring the objective ‘particle nature of light’ from electron collisions, or whether we are just objectifying the collective sensitivity of the instruments we are using.
For example, if all that we had to tell whether an object existed was experiments measuring someone’s eye movements, we could not tell the difference between an eye that was looking at a moving physical object or an eye that was looking at a graphic pattern that was purely visual. We could be looking at atomic REM patterns and thinking that we’re looking at a subatomic world.
Since there is no way, rationally, to tell the difference between a consensus of shared sensations and an object detected through sensation, it is my hypothesis that realism itself breaks at the classical limit. We think that quantum physics tells us that the classical limit is a hologram, but it makes more sense to me that quantum theory breaks realism and projects a world of non-sense non-objects in public space when we are really looking at the inflection point of subjectivity on a distant scale. It is quantum physics that is a hologram, not nature.