Likelihood is the ultimate unlikelihood: Notes on sense as the sole synthetic a priori manifestation of improbability
In the contemporary Western model of the universe, mechanism is presumed to be the sole synthetic a priori: the general noumenal schema which can only be considered an eternal given, and without which no phenomena can arise. In particular, the mechanism of statistical probability is seen as the engine of all possibility. Richard Dawkins’s title “The Blind Watchmaker” is an apt description – a kind of deism with no deity. Lawrence Krauss’s “A Universe From Nothing” is another apt title. The implication of both is that the universality of statistical distribution is the inevitable, inescapable, self-evident truth of all phenomena.
What is overlooked in these models is the nature of probability itself – the concept of likelihood, and indeed the concept of ‘like’. The etymology of ‘probable’ runs through French and Latin senses of ‘provable’ and ‘agreeable’ – a sense of credibility. What we like and what we find acceptable are similar concepts, and both relate to, well, similarity. Agreement and likeness are in agreement. The two words are a-like. What is like alikeness, though? What is similar to similarity, or equivalent to equivalence?
Consider the equal sign. “=” is a visual onomatopoeia – a direct icon which looks like what it represents: two parallel lines which illustrate precise congruence by their relation to each other. It’s an effective sign only because no further description is possible. So ubiquitous is the sense of comparison by similarity that we can’t easily get under it. It simply is the case that one line appears identical to the other, and when something is identical to another thing, we can notice that, and it doesn’t matter whether it’s a thought, feeling, sensation, or experience…anything can be similar to something. It could also be said that anything can be similar to anything in some sense. The universe can’t include something which is not similar to the universe in exactly the way that constitutes its inclusion. Inclusion, by definition, is commonality, and commonality is some kind of agreement.
Agreement is not a concept; it is the agent of all coherence, real and imagined. All forms and functions, all things and experiences are coherent precisely because they are ‘like’ other things and experiences, and because there is (to quote David Chalmers) ‘something that it is like’ to experience those phenomena. Without this ontological glue, this associative capacity which all participants in the universe share, there can be no patterns or events, no consistency or parts – only unrelated fragments. That would truly be a universe from nothing, but it would not be a universe.
The question, then, of where this capacity for agreement comes from is actually moot, since we know that nothing can come from anything which does not already possess this synthetic a priori capacity for inclusion – the capacity to cohere as that which seems similar in some sense to itself in spite of dissimilarity in other ways. Something that happens which is similar to something that happened at a different time is said to be happening again. A thing which is similar to another thing in a different location can be said to be ‘the same kind of thing’. This is what consciousness is all about, and it is what physics, mathematics, art, philosophy, law, etc. are all about. It is what nature is all about: the unity behind multiplicity and the multiplicity behind unity. Indra’s Net, Bohm’s Implicate Order, QM’s vacuum energy, etc. are all metaphors for this same quality…a quality which is embodied as metaphor itself in human psychology. Metaphor is meta-likeness. It links essential likeness across the existential divide. Metaphor bridges the explanatory gap, not by explanation, but by example. Like the = sign, the medium is the message.
Aside from their duty of ‘ferrying-over meaning’ from the public example to private experience and from the private example to public application, metaphors tell the story of metaphors themselves. Implicit within each metaphor is the bootstrap code, the instruction set for producing metaphors. Metaphor is the meta-meme and memes are meta-metaphors. This self-nesting is a theme (a meme theme, ugh) of sense, and a hint that sense itself is insuperable. Mathematically, you could say that the axiom of foundation is itself a non-well-founded set. The rule of rules does not obey any rules. Regularity is, by definition, the cardinal irregularity, as it can only emerge from its own absence, if it emerges at all. If it does not emerge, then it is still the cardinal exception to its own regularity, since everything else in the universe does emerge from something. First cause, then, by being uncaused itself, is the ultimate un-likelihood. First cause is by definition singular; it cannot be like anything else, and there can be nothing that it is like to be it. At the same time, everything that is not the first cause is like the first cause, and there is something that it is like to be that difference from the first cause – some aesthetic dissimilarity which constitutes some sense of partial separation (diffraction).
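For readers who want the set-theoretic reference spelled out, the axiom of foundation (regularity) in ZF is standardly stated as

\[
\forall x\,\big(x \neq \varnothing \rightarrow \exists y\,(y \in x \wedge y \cap x = \varnothing)\big)
\]

It forbids any set from being a member of itself and rules out infinite descending membership chains. The claim above is a gloss rather than a theorem: the axiom constrains every set yet is not itself among the sets it governs, and that is the loose sense in which it is being called ‘non-well-founded’ here.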
To get at the probability which is assumed by the Western mindset’s mechanistic universe, we have to begin with the Absolutely Improbable. This is akin to realizing that dark is the absence of light when it was formerly assumed that dark was only something which could be added to a light background. Improbability is the fundamental – the synthetic a priori from which commonality is derived. Statistical analysis is a second- or third-order abstraction, not a primary mechanism. The primary mechanism is likeness itself, not likelihood. Likelihood follows from likeness, which follows from Absolute uniqueness – from the single, all-but-impossible Everythingness rather than a plurality of inevitable nothingness.