Destroying the “World”
Borrowing this nice diagram (above) from a post by Ethan Hein, I have cannibalized it to show how the concept of the “world” can be transcended.
John Locke’s decision to make properties of bodies in space “primary” and properties of experience “secondary” reveals the Western bias toward the public and away from the private. In this way, all bodies are assumed to have an independent presence outside of any perspective from which they might be viewed, and experiences are assumed to be entirely dependent upon the interaction of physical bodies.
The twentieth century should have given us a clue. With Freud and Jung revealing that the depths of human psychology transcend our conscious expectations, Einstein proving the relativity of mass, energy, time, and space, and the surprises of quantum mechanics, the door to a fully integrated worldview very nearly opened. As if mirroring the turning of the political tide, the 1980s began to turn progressive relativity on its head and restore a kind of digital absolute. Instead of profound principles of contextual aesthetics, the revolution in physics championed a model of blind probability and computation.
The model that I propose does not contain a “world” which is independent of concrete aesthetics. What we see and feel is not the entirety of what can be seen and felt, but neither is it a “model” of an unfelt, unseen “world.” It is easy to think of parts of our brain as mapping to a model of our body. Different regions of the brain correspond to particular regions of the body. The same is true, however, of our emotions and thoughts. To be consistent, our emotions and thoughts would also have to be models, not of the brain (because the brain is part of the body, which is only a model), but just models period.
There is a double standard that leaks in with the Western-Lockean model. If we say that the body we experience is a model of the body in the world, then we are stuck with the consequence that the mind we experience is also a model of part of that same body in the world. Except that it clearly isn’t. What we think about is not modeled isomorphically in the activity of the brain. There is no computation that looks like cranberry sauce tastes, certainly not without one of these imaginative/imaginary “minds” to make the connection.
If we instead take the unreality of our model seriously, it makes more sense to turn the whole configuration inside out. If our experience models the brain’s activities, then so too must our experience of the world be a model. Since it is in that modeled world that we find the brain in the first place, we now have no reason to believe that the primary properties of bodies in space are really primary. In fact, the whole notion of primary and secondary, interior and exterior, could only be part of the modeling process. There is no indication of any kind of noumenal ‘world’ other than the inferences which we make through phenomenal experience.
To the contrary, all explorers of consciousness report a deep unity of awareness – a vastness of united presence or absence which underlies all phenomena. We do not see a Platonic factory of disembodied mathematics behind the curtain of secondary forms. In fact, forms themselves are completely irrelevant to mathematics. Geometry as we know it, shapes and angles and lines, is entirely superfluous to a quantum-digital universe. Geometry is the stuff of visual presentation and tactile, tangible manipulation. There is no geometry in a vacuum, no visible ‘bits’ or digital bodies which must draw these characters as you see them on the screen. What point could there be in modeling the invisible with the visible? What computer needs to see itself compute?
It works much better if we flip the model over, and see that the glue which holds mathematics together is consciousness. When we infer that a quantity is diminishing toward zero, we are inferring that intellectually. It is a practice of intuition or telepathy – a logical feeling that we have about patterns and what they imply. Bohm’s implicate order, I would say, can be understood more clearly as private physics. Not a disembodied order, but the precipitation of lower order sense within higher order sense. The emergence of cymatic patterns, for instance, in a layer of salt on a vibrating drum, is not a higher geometry which unites the salt, it is an exposure of more primitive logics – repetitive, dumb representations. Cosmic wallpaper.
Higher intelligence requires not only adding ‘complexity’ to such dumb representations, or increasing the computing resources, but an increase in sensitivity to implicit depths. The multiplexing of sensory contexts is subtractive to the point of simplicity. Something like pain or red is not a complex representation, but just the opposite, a simple and direct presence. These qualities could not be any more primary, from our perspective. It is through this primordial simplicity that true novelty ‘diverges’ from the absolute. Unrepeatable moments made of unrepeatable moments which are made to seem to repeat when viewed from a distance. The “world” is a creation of distancing, of the alienated perspective of elaborately nested subjectivity.
My response (top) to a diagram that I came across (lower). Some differences include:
- Outer edge is a continuum between “Everything” and “Almost Nothing” rather than “Nothing”
This reflects the idea that nothing cannot exist except as an expectation that something has about the absence of everything. It is therefore presence, rather than absence which is the primordial identity, and all phenomena are defined by substitutable gaps in pansensitivity. Awareness is localized by entropic masking or insensitivity rather than mechanical projection on top of “nothing”.
- Art – Aesthetics shares equal if not slightly greater prominence with Law – Mathematics
This overturns the Western assumption that appreciation of phenomena is a side effect of functionality. While locally true, for example, that humans like sugar because of its evolutionary value, the specific pleasure of sweet flavor is not itself describable by function, nor can it be assembled mechanically. That the universe is fundamentally an aesthetic agenda which works in order to play rather than the other way around is one of the major consequences of Primordial Identity Pansensitivity. The universe is a feeler of experiences, not just a producer of unfelt mechanisms.
- Color vs Greyscale connotes the relation between the concrete-experiential and the abstract-measured as one of reductionism rather than essentialism.
The idea here is that the rational is only a higher octave of the empirical, and the empirical is only an objectified reduction of the subjective-aesthetic. There is one continuous spectrum of sensitivity which reflects itself as desaturated forms and functions.
- The top down and bottom up arrows show the circulation of intentional sequence and unintentional consequence throughout the continuum.

From the pansensitivity pole on the top, where all substitutable gaps of sensitivity have been filled in and sense is total, to the pan-entropy pole on the bottom, where the ratio of gap to connection is almost infinitely great, a picture of cosmos emerges as a hyperplasticity of perspective.
- Synchronic and selective are new additions to the sensory-motive side. I think that it might work to call them electro-synchronic and magneto-selective. Electric force would seem to embody the gap-jumping, meta-phoric principle of sense-making, while magnetic fields are about orientation and masses moving themselves in relation to each other.
Beyond Probability
What is the probability that “pattern” can exist at all?
Can’t be calculated.
Because we cannot empirically calculate it, or because it is ontologically incalculable? Are there other comparably incalculable examples, or do all incalculables share a common ‘absolute’ quality (relating to primordial origins, consciousness, etc.)?
Ontologically incalculable. There is no way to measure the total possible outcomes.
In another way, it is super-calculable. Only one outcome results in the possibility of measurement. The probability of pattern existing in a universe in which the question of probability can be asked would be 100%…or even “infinitely greater” than 100%, as there is no possibility at all of measurement in a context that is devoid of pattern.
In another sense, the possibility of pattern is the most improbable condition in the sense that no ‘probability’ can precede pattern. Probability is an expectation of a particular type of pattern so by definition, pattern is not just incalculable, but pre-calculation (calculations are also pattern recognition).
Fair enough…
Defining Consciousness, Life, Physics
One of the more popular objections to any proposal for explaining consciousness is that the term consciousness is too vague, or that any explanation depends on what way the term is used. I disagree. The nature of electricity does not depend on what people think the word means, and I don’t think that consciousness does either. When someone is knocked unconscious, there is little doubt about what it means. In general terms, it means that they are not personally present. They are not personally affected by their environment, nor can they intentionally cause any effects on their environment.
Is that an agreeable place to start for everyone?
Can we agree also, in light of the physiology of the brain-stem, which consists of sensory neural pathways and motor neural pathways, that the concept of consciousness is at least closely identified with input/output?
Can we agree that it could be possible that input/output could be sufficient to describe the fundamental nature of consciousness? Does consciousness need to be something further than that?
Here is where, in my view, the whole dependency of definition comes in. The issue is that input/output can either be conceptualized from the exterior or the interior. The Western perspective, even when it tries to model the interior perspective of i/o, does so from the outside in. It assumes that the proprietary feeling of subjectivity is fundamentally inauthentic – that a system can only be built from generic conditions, laws, processes, etc, and cannot be truly original in any sense. In this way, no neuroscientific account, or cog-sci account, can really claim an inside-looking-out perspective. The Western orientation does not allow for the possibility that a person as a whole could act as an irreducibly singular receiver of experience and an originator of physical cause. Taking a cue from relativity, however, I suggest that perceptual integrity is identical to inertial framing, so that the frame as a whole can drive the micro-frame conditions within it, and vice versa. This is not vertical emergence from the bottom up, but parallel emergence. Multiple levels of description.
Going back to consciousness being definable in terms of its difference from unconsciousness, we can see that the difference between the two has some similarity between life and death. Can we agree that life too differs from death in that it relates to input/output for an organism and its environment?
We understand that an animal can be unconscious without being dead, but is this a difference in degree or a difference in kind? Could input/output also be sufficient to define “life”? We might say that life includes reproduction and growth, however even a single cell organism which is not reproducing or growing at any given time is considered a form of life. Does it not seem that the quality of environmental sensitivity and the ability to cause biochemical effects in response to that sensitivity are even more essential to defining life?
To sum up then, I am asking:
1) Doesn’t being conscious really just mean the ability to receive sense and project motive?
2) Doesn’t life really mean the same thing on a lower level?
From there, I would ask
3) Isn’t sense what we really mean by a ‘field’, and motive what we mean by a ‘force’?
4) Using relativity as an intuitive guide, can’t it be said that the concepts of ‘field’ or ‘force’ are really metaphors, and that the way we contribute to human society is identical to the way that any vector of sense contributes to its context? Isn’t consciousness just a form of life which is just a form of physics…which is just a form of sensory-motive interaction?
Wittgenstein, Physics, and Free Will
JE: My experience from talking to philosophers is that Wittgenstein’s view is certainly contentious. There seem to be two camps. There are those seduced by his writing who accept his account and there are others who, like me, feel that Wittgenstein expressed certain fairly trivial insights about perception and language that most people should have worked out for themselves and then proceeded to draw inappropriate conclusions and screw up the progress of contemporary philosophy for fifty years. This latter would be the standard view amongst philosophers working on biological problems in language as far as I can see.
Wittgenstein is right to say that words have different meanings in different situations – that should be obvious. He is right to say that contemporary philosophers waste their time using words inappropriately – anyone from outside sees that straight away. But his solution – to say that the meaning of words is just how they are normally used – is no solution. It turns out to be a smoke screen to allow him to indulge his own prejudices and not engage in productive explanation of how language actually works inside brains.
The problem is a weaseling going on that, as I indicated before, leads to Wittgenstein encouraging the very crime he thought he was clever to identify. The meaning of a word may ‘lie in how it is used’ in the sense that the occurrences of words in talk are functionally connected to the roles words play in internal brain processes and relate to other brain processes, but this is trivial. To say that meaning is use is, as I said, clearly a route to the W crime itself. If I ask how you know meaning means use, you will reply that a famous philosopher said so. Maybe he did, but he also said that words do not have unique meanings defined by philosophers – they are used in all sorts of ways, and there are all sorts of meanings of meaning that are not ‘use’, as anyone who has read Grice or Chomsky will have come to realise. Two meanings of a word may be incompatible yet it may be well nigh impossible to detect this from use – the situation I think we have here. The incompatibility only becomes clear if we rigorously explore what these meanings are. Wittgenstein is about as much help as a label on a packet of pills that says ‘to be taken as directed’.
But let’s be Wittgensteinian and play a language game of ordinary use, based on the family resemblance thesis. What does choose mean? One meaning might be to raise in the hearer the thought of having a sense of choosing. So a referent of ‘choose’ is an idea or experience that seems to be real and I think must be. But we were discussing what we think that sense of choosing relates to in terms of physics. We want to use ‘choose’ to indicate some sort of causal relation or an aspect of causation, or if we are a bit worried about physics still having causes we could frame it in terms of dynamics or maybe even just connections in a spacetime manifold. If Wheeler thinks choice is relevant to physics he must think that ‘choose’ can be used to describe something of this sort, as well as the sense of choosing.
So, as I indicated, we need to pin down what that dynamic role might be. And I identified the fact that the common presumption about this is wrong. It is commonly thought that choosing is being in a situation with several possible outcomes. However, we have no reason to think that. The brain may well not be purely deterministic in operation. Quantum indeterminacy may amplify up to the level of significant indeterminacy in such a complex system with such powerful amplification systems at work. However, this is far from established, and anyway it would have nothing to do with our idea of choosing if it were just a level of random noise. So I think we should probably work on the basis that the brain is in fact as tightly deterministic as matters here. This implies that in the situation where we feel we are choosing THERE IS ONLY ONE POSSIBLE OUTCOME.
The problem, as I indicated, is that there seem to be multiple possible outcomes to us because we do not know how our brain is going to respond. Because this lack of knowledge is a standard feature of our experience, our idea of ‘a situation’ is better thought of as ‘an example of an ensemble of situations that are indistinguishable in terms of outcome’. If I say when I get to the main road I can turn right or left, I am really saying that I predict an instance of an ensemble of situations which are indistinguishable in terms of whether I go right or left. This ensemble issue of course is central to QM, and maybe we should not be so surprised about that – operationally we live in a world of ensembles, not of specific situations.
So this has nothing to do with ‘metaphysical connotations’ which is Wittgenstein’s way of blocking out any arguments that upset him – where did we bring metaphysics in here? We have two meanings of choose. 1. Being in a situation that may be reported as being one of feeling one has choice (to be purely behaviourist) and 2. A dynamic account of that situation that turns out not to agree with what 99.9% of the population assume it is when they feel they are choosing. People use choose in a discussion of dynamics as if it meant what it feels like in 1 but the reality is that this use is useless. It is a bit like making burnt offerings to the Gods. That may be a use for goats but not a very productive one. It turns out that the ‘family resemblance’ is a fake. Cousin Susan who has pitched up to claim her inheritance is an impostor. That is why I say that although to ‘feel I am choosing’ is unproblematic the word ‘choice’ has no useful meaning in physics. It is based on the same sort of error as thinking a wavefunction describes a ‘particle’ rather than an ensemble of particles. The problem with Wittgenstein is that he never thought through where his idea of use takes you if you take a careful scientific approach. Basically I think he was lazy. The common reason why philosophers get tied in knots with words is this one – that a word has several meanings that do not in fact have the ‘family relations’ we assume they have – this is true for knowledge, perceiving, self, mind, consciousness – all the big words in this field. Wittgenstein’s solution of going back to using words the way they are ‘usually’ used is nothing more than an ostrich sticking its head in the sand.
So would you not agree that in Wheeler’s experiments the experimenter does not have a choice in the sense that she probably feels she has? She is not able to perform two alternative manoeuvres on the measuring set up. She will perform a manoeuvre, and she may not yet know which, but there are no alternatives possible in this particular instance of the situation ensemble. She is no different from a computer programmed to set the experiment up a particular way before the particle went through the slits, contingent on a meteorite not shaking the apparatus after it went through the slits (causality is just as much an issue of what did not happen as what did). So if we think this sort of choosing tells us something important about physics, we have misunderstood physics, I believe.
Nice response. I agree almost down the line.
As far as the meaning of words go, I think that no word can have only one meaning because meaning, like all sense, is not assembled from fragments in isolation, but rather isolated temporarily from the totality of experience. Every word is a metaphor, and metaphor can be dialed in and out of context as dictated by the preference of the interpreter. Even when we are looking at something which has been written, we can argue over whether a chapter means this or that, whether or not the author intended to mean it. We accept that some meanings arise unintentionally within metaphor, and when creating art or writing a book, it is not uncommon to glimpse and develop meanings which were not planned.
To choose has a lower limit, between the personal and the sub-personal which deals with the difference between accidents and ‘on purpose’ where accidents are assumed to demand correction, and there is an upper limit on choice between the personal and the super-personal in which we can calibrate our tolerance toward accidents, possibly choosing to let them be defined as artistic or intuitive and even pursuing them to be developed.
I think that this lensing of choice into upper and lower limits is, like red and blue shift, a property of physics – of private physics. All experiences, feelings, words, etc can explode into associations if examined closely. All matter can appear as fluctuations of energy, and all energy can appear as changes in the behavior of matter. Reversing the figure-ground relation is a subjective preference. So too is reversing the figure-ground relation of choice and determinism a subjective preference. If we say that our choices are determined, then we must explain why there is such a thing as having a feeling that we choose. Why would there be a difference, for example, in the way that we breathe and the way that we intentionally control our breathing? Why would different areas of the brain be involved in voluntary control, and why would voluntary muscle tissue be different from smooth muscle tissue if there were no role for choice in physics? We have misunderstood physics in that we have misinterpreted the role of our involvement in that understanding.
We see physics as a collection of rules from which experiences follow, but I think that it can only be the other way around. Rules follow from experiences. Physics lags behind awareness. In the case of humans, our personal awareness lags behind our sub-personal awareness (as shown by Libet, etc) but that does not mean that our sub-personal awareness follows microphysical measurables. If you are going to look at the personal level of physics, you only have to recognize that you can intend to stand up before you stand up, or that you can create an opinion intentionally which is a compromise between select personal preferences and the expectations of a social group.
Previous Wittgenstein post here.
The Primacy of Spontaneous Unique Simplicity
This post is inspired by a long running (perpetual?) debate that I have going with a fellow consciousness aficionado who is a mathematics professor. He has some unique insights into artificial intelligence, particularly where advanced interpretations of the likes of Gödel, Turing, Kleene open up to speculations on the nature of machine consciousness. One of his results has been sort of a Multiple Worlds Interpretation in which numbers themselves would replace metaphysics, so that things like matter become inevitable illusions from within the experience of Platonic-arithmetic machines.
His theory is perhaps nowhere crystallized more understandably than in his Universal Dovetailer Argument (UDA), in which there is a single machine which runs through every possible combination of programs, thereby creating everything that can be possible from basic arithmetic elements such as numbers, addition, and multiplication. This is based on the assumption that computation can duplicate the machinery which generates human consciousness – which is the assumption that I question. Below, I try to run through a treatment of where the conceptual problems of computationalism lie, and how to get past them by inverting the order in which his UD (Universal Dovetailer) runs. Instead of a program that mechanically writes increasingly complex programs, some of which achieve a threshold of self-awareness, I use PIP (Primordial Identity Pansensitivity) to put sense first and numbers second. Here’s how it goes:
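For readers unfamiliar with the construction, the “dovetailing” idea itself is a standard computability trick: interleave the execution of an unbounded family of programs so that every program eventually gets unboundedly many steps. A minimal sketch, in which the “programs” are trivial stand-in counters rather than the arithmetic machines of the actual UDA:

```python
import itertools

def make_program(i):
    """Stand-in 'program' i: a generator yielding its successive states."""
    def run():
        for step in itertools.count():
            yield (i, step)  # state of program i after `step` steps
    return run()

def dovetail(stages):
    """Interleave execution of programs 0..stages-1.

    At stage n, a new program n is admitted and every program admitted so
    far is advanced one step. In the idealized (never-halting) version
    `stages` is unbounded, so every program receives infinitely many steps;
    here we truncate for demonstration.
    """
    programs = {}
    trace = []
    for n in range(stages):
        programs[n] = make_program(n)   # admit a new program at stage n
        for i in range(n + 1):          # advance every admitted program
            trace.append(next(programs[i]))
    return trace

trace = dovetail(3)
# Stage 0 runs program 0; stage 1 runs programs 0,1; stage 2 runs 0,1,2:
# [(0,0), (0,1), (1,0), (0,2), (1,1), (2,0)]
```

The point of the interleaving is only fairness of execution; nothing in the mechanism itself distinguishes a program that models a mind from one that counts pebbles, which is exactly where the argument below takes hold.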
I. Trailing Dovetail Argument (TDA)
A. Computationalism makes two ontological assumptions which have not been properly challenged:
- The universality of recursive cardinality
- Complexity driven novelty.
Both of these, I intend to show, are intrinsically related to consciousness in a non-obvious way.
B. Universal Recursive Cardinality
Mathematics, I suggest, is defined by the assumption of universal cardinality: the universe is reducible to a multiplicity of discretely quantifiable units. The origin of cardinality, I suggest, is the partitioning or multiplication of a single, original unit, so that every subsequent unit is a recursive copy of the original.
Because recursiveness is assumed to be fundamental throughout math, the idea of a new ‘one’ is impossible. Every instance of one is a recurrence of the identical and self-same ‘one’, or an inevitable permutation derived from it. By overlooking the possibility of absolute uniqueness, computationalism must conceive of all events as local reproductions of stereotypes from a Platonic template rather than ‘true originals’.
A ‘true original’ is that which has no possible precedent. The number one would be a true original, but then all other integers represent multiple copies of one. All rational numbers represent partial copies of one. All prime numbers are still divisible by one, so not truly “prime”, but pseudo-prime in comparison to one. One, by contrast, is prime relative to mathematics, but no number can be a true original since it is divisible and repeatable and therefore non-unique. A true original must be indivisible and unrepeatable, like an experience, or a person. Even an experience which is part of an experiential chain that is highly repetitive is, on some level, unique in the history of the universe, unlike a mathematical expression such as 5 x 4 = 20, which is never any different than 5 x 4 = 20, regardless of the context.
I think that when we assert a universe of recursive recombinations that knows no true originality, we should not disregard the fact that this strongly contradicts our intuitions about the proprietary nature of identity. A generic universe would seem to counterfactually predict a very low interest in qualities such as individuality and originality, and in identification with trivial personal preferences. Of course, what we see is the precise opposite: all celebrity is propelled by some suggestion of unrepeatability, and the fine tuning of lifestyle choices is arguably the most prolific and successful feature of consumerism.
If the experienced universe were strictly an outcropping of a machine that by definition can create only trivially ‘new’ combinations of copies, why would those kinds of quantitatively recombined differences such as that between 456098209093457976534 and 45609420909345797353 seem insignificant to us, but the difference between a belt worn by Elvis and a copy of that belt to be demonstrably significant to many people?
C. Complexity Driven Novelty
Computationalism assumes finite simplicity; that is, it provides only a pseudo-uniqueness by virtue of the relatively low statistical probability of large numbers overlapping each other precisely. There is no irreducible originality to the original Mona Lisa; only the vastness of the physical painting’s microstructure prevents it from being exactly reproduced very easily. Such a perfect reproduction, under computationalism, is indistinguishable from the original, and therefore neither can be more original than the other (or, if there are unavoidable differences due to uncertainty and incompleteness, they would be noise differences which would be of no consequence).
This is where information theory departs from realism, since reality provides memories and evidence of which Mona Lisa is new and which one was painted by Leonardo da Vinci at the beginning of the 16th century in Florence, Italy, Earth, Sol, Milky Way Galaxy*.
Mathematics can be said to allow for the possibility of novelty only in one direction; that of higher complexity. New qualities, by computationalism, must arise on the event horizons of something like the Universal Dovetailer. If that is the case, it seems odd that the language of qualia is one of rich simplicity rather than cumbersome computables. With comp, there can be no new ‘one’, but in reality, every human experience is exactly that – a new day, a new experience, even if it often seems much like the one before. Numbers don’t work that way. Each mechanical result is identical. A = A. A does not ‘seem much like the A before, yet in a new way‘. This is a huge problem with mathematics and theoretical physics. They don’t get the connection between novelty and simplicity, so they hope to find it out in the vastness of super-human complexity.
II. Computation as Puppetry
I think that even David Chalmers, who I respect immensely for his contributions to philosophy of mind and in communicating the Hard Problem, missed a subtle but important distinction. The difference between a puppet and a zombie, while superficially innocuous, has profound implications for the formulation of a realistic critique of Strong AI. When Chalmers introduced or popularized the term zombie in reference to hypothetical perfect human duplicates which lack qualia and subjective experience, he inadvertently let an unscientific assumption leak in.
A zombie is supernatural because it implies the presence of an absence. It is an animated, un-dead cadaver in which a living person is no longer present. The unconsciousness of a puppet, however, is merely tautological – it is the natural absence of presence of consciousness which is the case with any symbolic representation of a character, such as a doll, cartoon, or emoticon. A symbolic representation, such as Bugs Bunny, can be mass produced using any suitable material substance or communication media. Even though Bugs is treated as a unique intellectual property, in reality, the title to that property is not unique and can be transferred, sold, shared, etc.
The reason that Intellectual Property law is such a problem is because anyone can take some ordinary piece of junk, put a Bugs Bunny picture on it, and sell more of it than they would have otherwise. Bugs can’t object to having his good name sullied by hack counterfeiters, so the image of Bugs Bunny is used both to falsely endorse an inferior product and to falsely impugn the reputation of a brand. The problem is, any reasonable facsimile of Bugs Bunny is just as authentic, in an Absolute sense, as any other. The only true original Bugs Bunny is the one we experience through our imagination and the imagination of Mel Blanc and the Looney Tunes animators.
The impulse to reify the legitimacy of intellectual property into law is related to the impulse to project agency and awareness onto machines. As a branch of the “pathetic fallacy”, which takes literally those human qualities which have been applied to non-humans as figurative conveniences of language, the computationalistic fallacy projects an assumed character-hood onto the machine as a whole. It reasons (falsely, I think) that since all our body can see of ourselves is a body, the body must be the original object from which the subject is produced through its functions. Such a conclusion, when we begin from mechanism, seems unavoidable at first.
III. Hypothesis
I propose that we reverse the two assumptions of mathematics above, so that
- Recursion is assumed to be derived from primordial spontaneity rather than the other way around.
- Novelty can only be meaningful if it re-asserts simplicity in addition to complexity.

This would mean:
- The expanding event horizon of the Universal Dovetailer would have to be composed of recordings of sensed experiences after the fact, rather than precursors to subjective simulation of the computation.
- Comp is untrue by virtue of diagonalization of immeasurable novelty against incompleteness.
- Sense out-incompletes arithmetic truth, and therefore leaves it frozen in stasis by comparison in every instant, and in eternity.
- Computation cannot animate anything except through the gullibility of the pathetic fallacy.
This may seem unfair or insulting to the many great minds who have been pioneering AI theory and development, but that is not my intent. By assertively pointing out the need to move from a model of consciousness which hinges on simulated spontaneity to a model in which spontaneity can never, by definition, be simulated, I am trying to express the importance and urgency of this shift. If I am right, the future of human understanding depends ultimately on our ability to graduate from the cul-de-sac of mechanistic supremacy to the more profound truth of rehabilitated animism. Feeling does compute because computation is how the masking of feeling into a localized unfeeling becomes possible.
IV. Reversing the Dovetailer
By uncovering the intrinsic antagonism between the above mathematical assumptions and the authentic nature of consciousness, it might be possible to ascertain a truer model of consciousness by reversing the order of the Universal Dovetailer (machine that builds the multiverse out of programs).
- The universality of recursive cardinality reverses as the Diagonalization of the Unique
- Complexity driven novelty can be reversed by Pushing the UD.
A. Diagonalization of the Unique
Under the hypothesis that computation lags behind experience*, no simulation of a brain can ever catch up to what a natural person can feel through that brain, since the natural person is constantly consuming the uniqueness of their experience before it can be measured by anything else. Since the uniqueness of subjectivity is immeasurable and unprecedented within its own inertial frame, no instrument from outside of that frame can capture it before it decoheres into cascades of increasingly generic public reflections.
PIP flips the presumption of Universal Recursive Cardinality inherent in mathematics so that all novelty exists as truly original simplicity, as well as a relatively new complex recombination, such that the continuum of novelty extends in both directions. This, if properly understood, should be a lightning bolt that recontextualizes the whole of mathematics. It is like discovering a new kind of negative number. Things like color and human feeling may exploit the addressing scheme that complex computation offers, but the important part of color or feeling is not in that address, but in the hyper-simplicity and absolute novelty that ‘now’ corresponds to that address. The incardinality of sense means that all feelings are more primitive than even the number one or the concept of singularity. They are rooted in the eternal ‘becoming of one’; before and after cardinality. Under PIP, computation is a public repetition of what is irreducibly unrepeatable and private. Computation can never get ahead of experience, because computation is an a posteriori measurement of it.
For example, a computer model of what an athlete will do on the field that is based on their past performance will always fail to account for the possibility that the next performance will be the first time that athlete does something that they never have done before and that they could not have done before. Natural identities (not characters, puppets, etc) are not only self-diagonalizing, natural identity itself is self-diagonalization. We are that which has not yet experienced the totality of its lifetime, and that incompleteness infuses our entire experience. The emergence of the unique always cheats prediction, since all prediction belongs to the measurements of an expired world which did not yet contain the next novelty.
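The formal move this section borrows is Cantor-style diagonalization. A minimal sketch of that move (my own toy illustration, not part of the original text): given any finite catalog of predicted behavior sequences, a sequence that differs from every prediction can always be constructed by flipping the i-th entry of the i-th row.

```python
# Cantor-style diagonalization: construct a sequence that escapes
# any finite enumeration of 0/1 "prediction" sequences.

def diagonalize(catalog):
    """Return a sequence that differs from row i of `catalog` at position i."""
    return [1 - row[i] for i, row in enumerate(catalog)]

predictions = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
]

novel = diagonalize(predictions)
# `novel` disagrees with prediction i at position i, so it matches no row.
for i, row in enumerate(predictions):
    assert novel[i] != row[i]
print(novel)  # [1, 0, 0, 1]
```

However long the catalog grows, the same construction yields a behavior outside it, which is the sense in which prediction always belongs to an expired enumeration.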
B. Pushing the UD – If the UD is a program, it pulls the experienced universe behind it as it extends the computed realm, faster than light, ahead of local appearances. It assumes all phenomena are built bottom-up from generic, interchangeable bits. The hypothesis under PIP is that if there were a UD, it would be pushed by experience from the top down, as well as recollecting fragments of previous experiences from the bottom up. Each experience decays from immeasurable private qualia that are unique into public reflections that are generic recombinations of fixed elements. Reversing the Dovetailer puts universality on the defensive, so that it becomes a storage device rather than a pseudo-primitive machina ex deo.
The primacy of sense is corroborated by the intuition that every measure requires a ruler – some example which is presented as an index for comparison. The uniqueness comes first, and the computability follows by imitation. The un-numbered Great War becomes World War I only in retrospect. The second war does not follow the rule of world wars, it creates the rule by virtue of its similarities. The second war is unprecedented in its own right, as an original second world war, but unlike the number two, it is not literally another World War I. In short, experiences do not follow from rules; rules follow from experience.
V. Conclusions
If we extrapolate the assumptions of Computationalism, I think that they would predict that the painting of the Mona Lisa is what always happens under the mathematical conditions posed by a combination of celestial motions, cells, bodies, brains, etc. There can be no truly original artwork, as all artworks are inevitable under some computable probability, even if the particular work is not predictable specifically by computation. Comp makes all originals derivatives of duplication. I suggest that it makes more sense that the primordial identity of sense experience is a fundamental originality from which duplication is derived. The number one is a generic copy – a one-ness which comments on an aspect of what is ultimately boundaryless inclusion rather than naming originality itself.
Under Multisense Realism (MSR), the sense-first view ultimately makes the most sense but it allows that the counter perspective, in which sense follows computation or physics, would appear to be true in another way, one which yields meaningful insights that could not be accessed otherwise.
When we shift our attention from the figure of comp in the background of sense to the figure of sense in the background of comp, the relation of originality shifts also. With sense first, true originality makes all computations into imposters. With computation first, arithmetic truth makes local appearances of originality artifacts of machine self-reference. Both are trivially true, but if the comp-first view were Absolutely true, there would be no plausible justification for such appearances of originality as qualitatively significant. A copy and an original should have no greater difference than a fifteenth copy and a sixteenth copy, and being the first person to discover America should have no more import than being the 1,588,237th person to discover America. The title of this post as 2013/10/13/2562 would be as good of a title as any other referenceable string.
*This is not to suggest that human experience lags behind neurological computation. MSR proposes a model called eigenmorphism to clarify the personal/sub-personal distinction, in which neurological-level computation corresponds to sub-personal experience rather than personal-level experience. This explains the disappearance of free will in neuroscientific experiments such as Libet et al. Human personhood is simple but deep. Simultaneity is relative, and nowhere is that more true than along the continuum between the microphysical and the macrophenomenal. What can be experimented on publicly is, under MSR, a combination of near-isomorphic and near-contra-isomorphic to private experience.
Perspectives on Gravity
“The universe is shaped exactly like the Earth,
If you go straight long enough you’ll end up where you were.” – Modest Mouse
Science
Newton conceived of universal gravitation as a ratio of mass to distance.
“…every point mass in the universe attracts every other point mass with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.”
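The quoted law can be written compactly; a standard rendering (my notation, not from the original post), where \(F\) is the attractive force, \(m_1\) and \(m_2\) the two point masses, \(r\) the distance between them, and \(G\) the gravitational constant:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

Doubling either mass doubles the force, while doubling the distance quarters it.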
Einstein revolutionized classical gravity with General Relativity, merging space with time and formulating the equivalence of mass and energy. Rather than a rigid Cartesian plenum of 3-D space and a one dimensional timeline, Einstein saw a flexible, four dimensional ‘mollusk’ of spacetime contoured by the relations of matter and energy. GR, along with Special Relativity, made the universe a much stranger place, with time dilation, black holes, the relativity of simultaneity, and the constancy of the speed of light as a universal absolute.
Since quantum theory begins at the other end of the cosmological continuum of size, there has been a continuity problem between sub-nuclear physics and astrophysics. Quantum doesn’t match up with relativity very well, so the quest to find a bridge between the two has been a prominent open question for contemporary physics.
Here are some brief signposts along the highway between QM and GR:
In most, though not all, theories of quantum gravity, the gravitational field itself is also quantized. Since the contemporary theory of gravity, general relativity, describes gravitation as the curvature of spacetime by matter and energy, a quantization of gravity seemingly implies some sort of quantization of spacetime geometry. Insofar as all extant physical theories rely on a classical spacetime background, this presents not only extreme technical difficulties, but also profound methodological and ontological challenges for the philosopher and the physicist. Though quantum gravity has been the subject of investigation by physicists for over eighty years, philosophers have only just begun to investigate its philosophical implications.
Gravity makes quantum superposition decohere into classical physics.
Weak gravitational waves that fill the Universe are enough to disturb quantum superpositions and ensure that large objects behave according to classical physics. […] Many theorists now believe that macroscopic superpositions, in which numerous quantum components must maintain a precise relationship with each other, are disrupted by continual environmental influences. Such disturbances, acting differently on each component of a superposition, “decohere” it into a classical state that is, say, dead or alive, but not both. Even a system as small as an atom requires extraordinary protection from stray electromagnetic fields in the lab to remain in a superposition. Since gravitational fields are both pervasive and inescapable, researchers have proposed that they play a fundamental role in ensuring that macroscopic systems behave in a classical way.
We confirm, in this context, that the dynamics of a Brownian particle driven by space-time dependent fluctuations evolves towards Hamiltonian chaos and fractional diffusion. The corresponding motion of the particle has a time-dependent and nowhere vanishing acceleration. Invoking the equivalence principle of general relativity leads to the conclusion that fractional diffusion is locally equivalent to a transient gravitational field. It is shown that gravity becomes renormalizable as Newton’s constant converges towards a dimensionless quantity.
Modified Newtonian Dynamics (MOND) was proposed to explain the galaxy rotation problem. Unexpectedly, when it was first observed, the rotational velocity of galaxies appeared to be uniform: Newton’s theory of gravity predicts that the farther away an object is from the center of the galaxy it belongs to, the lower its velocity will be (for example, the velocity of a planet orbiting a star decreases as the distance between them increases). These observations gave birth to the idea that a halo of invisible stuff was surrounding each galaxy: dark matter.
In this new model, the gravitational field still increases as you near the black hole’s core. But unlike previous models, this doesn’t end in a singularity. Instead gravity eventually reduces, as if you’ve come out the other end of the black hole and landed either in another region of our universe, or another universe altogether. Despite only holding for a simple model of a black hole, the researchers – and Ashtekar – believe the theory may banish singularities from real black holes too.
Metaphors and Symmetries
Switching gears from the scientific sense of gravity to the personal sense, there are some worthwhile themes to explore. The etymology of gravity links heaviness with seriousness. Gravity relates to grave, and groove. Digging ditches and engraving (scratching). The association with burial and death probably accounts for the connection from grave to words like serious, severe, and swear. The idea of a sworn oath or an engraved ring relates to a sense of a permanent pledge. There is an intent to hold on steadily against all odds, or all distraction. The root of swear crosses over to answer also – a hint that ‘saying’ something out loud can have serious or permanent consequences.
Serious or grave subjects are often called ‘heavy’ or ‘dense’ while frivolous topics are ‘light’ or refer to things which are airy (fluff, puff pieces). Insubstantial or insincere talk is ‘blowing smoke’. Both the literal and figurative meanings of heavy (literal = heavy weight; figurative = heavy important) have light as an antonym, but it is light in two different figurative senses. The antonym of the literal sense of light is dark, which comes back around to gravity in the form of black holes, where the intensity of gravity does not allow light to escape. It could be said that a black hole is a star’s grave.
Under the influence of gravity, weight, density, and pressure increase. Movement becomes more difficult and slow. More power is required to exert the same force. Metaphorically there is a lot of crossover – feelings of stress are compared to being ‘under a lot of pressure’, which is associated with risk or powerlessness. Resistance and inertia figure in, as does entropy. Under pressure, time becomes more valuable, and the tolerance for distractions (nonsense) is lowered. Ideally, the significance of the goal should be worth the effort. Monumental investments expect monumental results.
If electromagnetism is the ‘Spring’ of matter’s energy, then gravity is its Fall. If energy is a fountain which lights the matter into significance, then gravity is the drain which flattens entropy and reverses its disposition into a one dimensional, time slowing presence – mass. Said another way, gravity is the metabolism of spacetime, and the embodied force of entropy.
Dr. Thanu Padmanabhan of the Inter-University Center for Astronomy and Astrophysics in Pune, India said gravity “is the thermodynamic limit of the statistical mechanics of ‘atoms of space-time.’”
Erik Verlinde, 48, a respected string theorist and professor of physics at the University of Amsterdam, is quoted as saying that gravity is an “entropic force.”
Gravity’s symmetry with electromagnetism extends to the metaphysical. The etymology of the words burden and bear go back to the word for ‘birth’. Themes of give and take, and birth and death, wrap around each other. The idea of curvature, of entropy statistically evening out odd statistics and jagged exceptions is an expression of magnitude and relativity. The pull of gravity doesn’t make things spin or orbit, but since the number of velocities that a body can have is so much greater than the number of ways a body can be stationary, entropy ensures that most everything is moving somewhere, and gravity pulls light things close to heavy things faster than heavy things are pulled to light things, causing the lighter moving thing to wrap its path around the heavier mass in an ellipse.
With a black hole, and on Earth, gravity and entropy suggest a connection to loss and absence. Ultimately, gravity shows that even absence turns back on itself, since it can only ever be the sense of its own absence – the presence of the absence of presence. Sense can only diminish relative to itself, it can only appear to be slow or missing by comparison. Gravity is about falling, collapsing, and squeezing the space and time out of incidents to make them co-incidents with shared inertia. Gravity is the force of pseudointentionality, the entropy of entropy. If perception elides its blindness and entropy to concentrate significance, gravity elides in the opposite way, through quantitative density. Anomalies are crushed and drowned into smooth curves until they explode. Stars explode into clouds which collect into other stars, scars of stars, and galactic spiral clouds of stars.
Are teleonomy, evolution, entropy, and gravity the same thing? If electromagnetism and energy represent uniqueness and creativity on every level, gravity and entropy are a statistical rounding off of all of that uniqueness across all the inertial frames. It settles everything into hierarchies of magnitude on the outside and figurative scales of greatness (importance) on the inside.
Extra Credit
Gravity isn’t directly related to time. Although much of our timekeeping is modeled after astronomical cycles, neither the rotation of the Earth nor its heliocentric orbit is caused by gravity alone. It seems easy to mistakenly guess that planets have gravity because they spin, as if it were some kind of centripetal force, but gravity would be almost the same if Earth were not spinning, and gravity itself is not causing the spin in the first place. What we think is that planets condensed from moving clouds of cosmic debris, and as they became smaller, their motion became faster (conservation of angular momentum, like a figure skater pulling their arms in for a faster spin).
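The figure-skater point is just arithmetic on the conserved quantity L = I·ω. A toy numerical check (my own illustration, with made-up values, not from the original post):

```python
# Conservation of angular momentum: L = I * omega stays constant,
# so halving the moment of inertia I doubles the angular velocity omega.

I1, omega1 = 10.0, 2.0   # initial moment of inertia and angular velocity (arbitrary units)
L = I1 * omega1          # conserved angular momentum: 20.0

I2 = I1 / 2              # the cloud (or skater) contracts: I halves
omega2 = L / I2          # omega must grow to keep L constant

print(omega2)            # 4.0 -- twice the original spin rate
```

Gravity supplies the contraction, but the speed-up itself follows from conservation, not from any extra gravitational push.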
As far as gravity is concerned, the Earth and Sun need only be drawn together; all orbits, spins, and tilts in the solar system are the residual effects of the events which initially accelerated the cloud of matter into motion or changed its direction. The tilt of the Earth is thought to be the result of collisions with other massive objects during its early history. Without the tilt there would be no seasons, as every position of the Earth’s orbit would produce no noticeable difference. Same with the spin. Gravity doesn’t care whether the Earth spins or not.
What gravity does do for time is provide conditions of relative permanence that would not exist otherwise. Without gravity we could still keep track of cycles of time, but they would forever be changing, since our view of the universe from a non-orbiting planet hurtling aimlessly through space would shift permanently. Gravity provides a frame of circularity which allows greater degrees of order in our perception. Gravity doesn’t make time, but it makes it more relevant.
P, PP, PIP, MSR Disambiguation
Pansensitivity (P) proposes that sensation is a universal property.
Primordial Pansensitivity (PP) proposes that because sensation is primitive, mechanism is derived from insensitivity. Whether it is mechanism that assumes form without sensibility (materialism) or function without sensation (computationalism), they both can only view feeling as a black box/epiphenomenon/illusion.
Under PP, both materialism and computationalism make sense as partial negative images of P, so that PP is the only continuum or capacity needed to explain feeling and doing (sense-motive), objective forms and functions (mass-energy), and informative positions and dispositions (space-time).
PP says that the appearance of forms and functions are, from an absolute perspective, sensory-motive experiences which have been alienated through time and across space.
Primordial Identity Pansensitivity (PIP) adds to the Ouroboran Monism of PP (sense twisted within itself = private experience vs public bodies) by suggesting that PP is not only irreducible, but it is irreducibility itself.
PIP suggests that distance is a kind of insensitivity, so that all other primitive possibilities which are grounded in mechanism, such as information or energy, are reductionist in a way which oversignifies the distanced perspective, while anthropomorphic primitives such as love or divinity are holistic in a way which oversignifies the local perspective. Local and distant are assumed to be Cartesian opposites, but PIP maps locality and distance as the same in terms of being two opposite branches of insensitivity. Both the holistic and reductionist views ignore the production of distance which they both rely on for their perspective, both take perspective itself, perception, and relativity for granted.
MSR (Multisense Realism) tries to rehabilitate reductionism and holism by understanding them as bifocal strategies which arise naturally, each appropriate for a particular context of perceived distance. Both are near-sighted and far-sighted in opposite ways, as the subject seeks to first project anthropomorphism outward onto the world and then, following a crisis of disillusionment, seeks the opposite – to project exterior mechanism into the self. MSR invites us to step outside of the bifocal antagonism and into a balanced appreciation of the totality.