Obstruction of Solitude: A Guide To Noise
“And then…all the noise! All the noise, noise, noise, noise!
If there’s one thing I hate…all the noise, noise, noise, noise!
And they’ll shriek, squeak, and squeal racing round on their wheels,
Then dance with jin-tinglers tied onto their heels!” – The Grinch

“Karma police, arrest this man
He talks in maths
He buzzes like a fridge
He’s like a detuned radio” – Radiohead
It might be asked, “Why should we care about noise?” Two reasons come to mind.
1) To reduce, contain, or otherwise avoid it.
2) To understand what isn’t noise, and why we prefer that.
Real Noise
The general use of the word noise refers to an unpleasant sound. Even on this most literal level, there is a sense of denial about the extent to which unpleasant qualities are subjective. The stereotypical parent, upon hearing the stereotypical teenager’s musical taste being played at high volume, may yell something like “Turn off that infernal noise!”. There is a sense that the sound demands to be labeled objectively as a terrible thing to listen to, rather than as a sound which presents itself differently according to one’s state of mind or development.
At the same time, we cannot rule out all objective, or at least pseudo-objective qualities related to signal and noise. A garage recording of a metal band or a jackhammer attacking the pavement can be uncontroversially defined as being ‘noisy’, particularly in comparison to other, more gentle sounds. ‘Real noise’, then, seems to have a range of subjective and objective qualifiers. Loud, percussive sounds are inherently noisy to us humans, and we have reason to assume the same is true for animals and even plants:
“Dorothy Retallack tried experimenting with different types of music. She played rock to one group of plants and, soothing music to another. The group that heard rock turned out to be sickly and small whereas the other group grew large and healthy. What’s more surprising is that the group of plants listening to the soothing music grew bending towards the radio just as they bend towards the sunlight.” – source
Whether we enjoy loud, percussive sounds is a matter of taste and context. Even the most diehard metal fan probably does not want to hear their favorite band blasting at five o’clock in the morning from a passing car. Not being able to control what we listen to contributes to our perception of it as noise.
Obstruction, Distraction, Destruction, and Leaks
Whether a piece of music offends our personal taste, or it is simply so loud that we can’t ‘hear ourselves think’, the experience of being distracted seems central to its status as noise. In the parlance of sound engineers, and later Silicon Valley schmoozers, the ‘signal-to-noise ratio’ describes this capacity of noise to distract or divert attention from the intended communication. Noise not only obstructs access to the signal; the disturbance that it causes also detracts from the quality of the signal itself. If the signal-to-noise ratio is poor enough, it may not be worth the effort for the receiver to try to interpret the signal, and communication is destroyed.
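As a concrete illustration of that ratio, here is a minimal sketch of my own (not from the original post; the 440 Hz tone and the noise level are arbitrary stand-ins): the ratio of signal power to noise power, expressed in decibels.

```python
# A minimal sketch of the signal-to-noise ratio: the power of the intended
# signal relative to the power of everything else.
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10 * np.log10(p_signal / p_noise)

t = np.linspace(0, 1, 8000)
tone = np.sin(2 * np.pi * 440 * t)      # the intended communication
hiss = 0.1 * np.random.randn(t.size)    # the obstructing background
print(f"{snr_db(tone, hiss):.1f} dB")   # ~17 dB: the signal is still legible
```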
This sense of noise as an obstacle to communication extends beyond audio or electronic signals to any context where information is accessed, transmitted, or stored. In his influential work on telecommunications, Claude Shannon described information entropy as a measure of how costly a signal is to compress – typically those patterns which cannot be easily discerned as either part of the intentional signal or part of the background noise. Despite the tremendous computational resources available for mobile communication, the signal quality on mobile devices is still generally inferior to landlines. Between microphone gating that clips off conversation instead of ambient street sounds, and the loss of packets due to radio broadcast or network routing conditions, it is amazing that it sounds as good as it does, but it is still a relatively leaky way to transmit voices.
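To make Shannon’s point concrete, here is a minimal sketch (my illustration, not Shannon’s full formalism): the empirical entropy of a byte stream tracks how well it compresses – patterned, redundant data has low entropy and shrinks; patternless data has high entropy and resists compression.

```python
# A minimal sketch relating Shannon entropy to compressibility.
import math, os, zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy: H = -sum(p * log2(p)) over byte frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

redundant = b"noise " * 1000   # highly patterned, low entropy: compresses well
random_ish = os.urandom(6000)  # patternless, high entropy: costly to compress

for data in (redundant, random_ish):
    h = entropy_bits_per_byte(data)
    ratio = len(zlib.compress(data)) / len(data)
    print(f"entropy {h:.2f} bits/byte, compressed to {ratio:.0%} of original")
```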
Neural Noise and Withdrawal
Every sense has its own particular kind of noise. Vision has glare, blur, and phosphene patterns (‘seeing stars’). Touch has non-specific tingling or itching. Olfactory and gustatory senses encounter foul odors or bad aftertastes. Feelings like nausea and dizziness which are unrelated to food or balance conditions are a kind of noise (noise is etymologically related to nausea and noxious). Part of the effect of withdrawal from an addiction is that the brain becomes overly sensitized to irritating stimuli in general. It’s almost like an allergic response, in that the systems which would ordinarily protect us from threats are distracted by a false threat and turned on themselves. Our sensitivity to the environment, having been hijacked by an external supply of pleasurable signals, has built up a tolerance for those super-saturated instructions.
With any kind of addiction, even healthy ones like exercise or washing your hands, the nature of sense is to accommodate and normalize perceptions which are present regularly. Because the addiction provides positive reinforcement regularly, there is an artificially low noise floor which invites your senses to recalibrate and listen more closely to the noise (which would be quite adaptive evolutionarily: you would still want to hear that tiger or smell that smoke even while enjoying a lifestyle of hedonism and decadence). When the source of positive distraction is removed, the sensitivity to negative distraction is still cranked up to 11, which of course taps into the original motivation for escaping the negative distractions of life with an addiction in the first place. We want something to soothe our nerves, to numb the sensitivity and quiet the noise.
A Recipe For Noise
There seem to be general patterns which are common to many kinds of noise. Noise can be either an obstructing presence or a conspicuous absence (like the dropouts on a phone call). It can be a public or a private condition which clouds judgment and invites impatience, frustration, and intolerance. Noise can be that which is incoherent, irrelevant, redundant, or inappropriate. Some signals can be temporarily irrelevant or incoherent, while others are permanently so. Besides being too loud, an audio noise can also be soft, such as a hiss or other aesthetic defect that exposes leaky conditions in the recording process. Context is important: as with withdrawal from addiction, our senses are attuned to the relativity of sensation rather than to objective measurement. Grey looks darker next to white than it does next to black.
Our ability to use our attention to pivot from foreground to background is part of what defines the difference between signal and noise, or sense and nonsense. We can all relate to the Charlie Brown effect, where the words that a teacher says are reduced to unintelligible vocalizations. As you read these words now, you may be scanning over so much tedious verbiage that looks like generic wordiness more than any particular message. Any signal can be a noise if you don’t pay attention to it in the right way, and any noise can be used as a meaningful code or symbol. Perhaps there is a way to get over our addictions a little easier if we can learn to see our irritation and cravings as a sign that we are on the right path to restoring our neurological gain.
Many Cures
The destruction of information or the suppression of noise is not as simple as it may seem. Take, for example, the difference between analgesic, anesthetic, and narcotic effects. Pain can be relieved systemically, locally, or simply by being made to seem irrelevant. It can be selectively suppressed or wiped out as part of an overall deadening of sensation. There are other ways to get pain relief besides pharmaceuticals as well. Athletes and soldiers are known to perform with severe injuries, and many people have endured astonishing hardships for the sake of their family without being fully aware of the pain they were in. While there may be endogenous pharmacology going on which accounts for the specific pain suppression, it is ultimately the context which the subject is conscious of that drives the release of endorphins and other neurotransmitters.
Semiotics of Noise
Looking at noise from a Peircean perspective, it can be seen as a failure of semiosis – a broken icon, symbol, or index. A broken index would be something like tinnitus or a phantom limb. The signal we are receiving does not correspond to the referent that we expect, and in fact corresponds only to a problem with the signaling mechanism, or some deeper problem. A signal which is broken as an index but can be understood meaningfully as a symptom of something else (maybe the tinnitus is due to a sinus infection) has reverted from a teleological index to a teleonomic* index. It coincides with a condition, but does not represent it faithfully in any way. It is noise in the sense that the expected association must be overlooked intentionally to get to the unintentional association to a symptom.
A broken index would also be one which we deem irrelevant. This type of noise includes the proliferation of automatic alerts, false alarms, flashing lights, spam, etc. There may be nothing wrong with what the message is saying, but considerations of redundancy and contextual inappropriateness make it clear that what a computer thinks is important and what we think is important are very different things. This type of noise fails at the pragmatic level. It’s not that we don’t understand the message, or that it’s not for us; it’s that we don’t want to do anything about it.
Broken icons and symbols would similarly be made incoherent, irrelevant, or inappropriate by lacking enough syntactic integrity or semantic content to justify positive attention. Fragmented texts or degenerated signs can fail to satisfy functionally or aesthetically, either on their own, or due to intrusions from outside of the intended communication channel. The overall function of noise is to decompose. Like the odor of something that has spoiled, disorder and decay are symptoms of entropy. In the schema of cosmic metabolism, entropy is the catabolic phase of forms and functions – a kind of toll exacted by space and time which ensures that whatever rises to the threshold of existence and importance, will eventually destabilize, its differences de-tuning to indifference.
What Noise Tells Us About Signals
If we begin with the premise that signal and noise are polar opposites, then it may be useful to look at the opposite of some of the terms and concepts that have just been discussed. If noise is irrelevant, inappropriate, incoherent, and redundant, then the qualities which make something significant or important should include being relevant, appropriate, coherent, and essential. Where noise obstructs, distracts, and destroys, sense instructs, attracts, and constructs. Where noise is noxious and disgusting, signals soothe and give solace.
In the larger picture of self and consciousness, it is our solitude that is threatened by noise. Solitude, like solidity and structure, is related to low entropy. It is the feeling of strong continuity and coherence, a silent background from which all moments of sound and fury are foregrounded. It is what receives all signals and insulates all noise. Integrated information? Maybe. The Philosopher’s Stone? Probably.
*teleonomy describes conditions of causality which are driven by blind statistics rather than sensible function. Evolution, for example, is a teleonomy since it does not care which species live or die, it is only those who happen to have been better suited to their ecological niche which end up reproducing most successfully.
Wittgenstein, Physics, and Free Will
JE: My experience from talking to philosophers is that Wittgenstein’s view is certainly contentious. There seem to be two camps. There are those seduced by his writing who accept his account, and there are others who, like me, feel that Wittgenstein expressed certain fairly trivial insights about perception and language that most people should have worked out for themselves, and then proceeded to draw inappropriate conclusions and screw up the progress of contemporary philosophy for fifty years. This latter would be the standard view amongst philosophers working on biological problems in language, as far as I can see.
Wittgenstein is right to say that words have different meanings in different situations – that should be obvious. He is right to say that contemporary philosophers waste their time using words inappropriately – anyone from outside sees that straight away. But his solution – to say that the meaning of words is just how they are normally used – is no solution; it turns out to be a smoke screen to allow him to indulge his own prejudices and not engage in productive explanation of how language actually works inside brains.
The problem is that there is a weaseling going on which, as I indicated before, leads to Wittgenstein encouraging the very crime he thought he was clever to identify. The meaning of a word may ‘lie in how it is used’ in the sense that the occurrences of words in talk are functionally connected to the roles words play in internal brain processes and relate to other brain processes, but this is trivial. To say that meaning is use is, as I said, clearly a route to the W crime itself. If I ask how you know meaning means use, you will reply that a famous philosopher said so. Maybe he did, but he also said that words do not have unique meanings defined by philosophers – they are used in all sorts of ways, and there are all sorts of meanings of meaning that are not ‘use’, as anyone who has read Grice or Chomsky will have come to realise. Two meanings of a word may be incompatible yet it may be well nigh impossible to detect this from use – the situation I think we have here. The incompatibility only becomes clear if we rigorously explore what these meanings are. Wittgenstein is about as much help as a label on a packet of pills that says ‘to be taken as directed’.
But let’s be Wittgensteinian and play a language game of ordinary use, based on the family resemblance thesis. What does choose mean? One meaning might be to raise in the hearer the thought of having a sense of choosing. So a referent of ‘choose’ is an idea or experience that seems to be real and I think must be. But we were discussing what we think that sense of choosing relates to in terms of physics. We want to use ‘choose’ to indicate some sort of causal relation or an aspect of causation, or if we are a bit worried about physics still having causes we could frame it in terms of dynamics or maybe even just connections in a spacetime manifold. If Wheeler thinks choice is relevant to physics he must think that ‘choose’ can be used to describe something of this sort, as well as the sense of choosing.
So, as I indicated, we need to pin down what that dynamic role might be. And I identified the fact that the common presumption about this is wrong. It is commonly thought that choosing is being in a situation with several possible outcomes. However, we have no reason to think that. The brain may well not be purely deterministic in operation. Quantum indeterminacy may amplify up to the level of significant indeterminacy in such a complex system with such powerful amplification systems at work. However, this is far from established, and anyway it would have nothing to do with our idea of choosing if it was just a level of random noise. So I think we should probably work on the basis that the brain is in fact as tightly deterministic as matters here. This implies that in the situation where we feel we are choosing THERE IS ONLY ONE POSSIBLE OUTCOME.
The problem, as I indicated, is that there seem to be multiple possible outcomes to us because we do not know how our brain is going to respond. Because this lack of knowledge is a standard feature of our experience, our idea of ‘a situation’ is better thought of as ‘an example of an ensemble of situations that are indistinguishable in terms of outcome’. If I say that when I get to the main road I can turn right or left, I am really saying that I predict an instance of an ensemble of situations which are indistinguishable in terms of whether I go right or left. This ensemble issue is of course central to QM, and maybe we should not be so surprised about that – operationally we live in a world of ensembles, not of specific situations.
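To illustrate the ensemble point, here is a toy sketch of my own (not part of the original exchange; the hidden state and decision rule are arbitrary stand-ins): the decision is fully deterministic per actual state, yet looks open because the observer only knows the ensemble of indistinguishable states.

```python
# A toy sketch: one possible outcome per state, plural outcomes per ensemble.
import random

def decide(hidden_state: int) -> str:
    """Fully deterministic: exactly one possible outcome per actual state."""
    return "left" if hidden_state % 2 else "right"

# 'The situation' as experienced is really an ensemble of situations that
# are indistinguishable in terms of outcome:
ensemble = [random.randrange(10**6) for _ in range(1000)]
print({decide(s) for s in ensemble})  # {'left', 'right'} over the ensemble
```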
So this has nothing to do with ‘metaphysical connotations’ which is Wittgenstein’s way of blocking out any arguments that upset him – where did we bring metaphysics in here? We have two meanings of choose. 1. Being in a situation that may be reported as being one of feeling one has choice (to be purely behaviourist) and 2. A dynamic account of that situation that turns out not to agree with what 99.9% of the population assume it is when they feel they are choosing. People use choose in a discussion of dynamics as if it meant what it feels like in 1 but the reality is that this use is useless. It is a bit like making burnt offerings to the Gods. That may be a use for goats but not a very productive one. It turns out that the ‘family resemblance’ is a fake. Cousin Susan who has pitched up to claim her inheritance is an impostor. That is why I say that although to ‘feel I am choosing’ is unproblematic the word ‘choice’ has no useful meaning in physics. It is based on the same sort of error as thinking a wavefunction describes a ‘particle’ rather than an ensemble of particles. The problem with Wittgenstein is that he never thought through where his idea of use takes you if you take a careful scientific approach. Basically I think he was lazy. The common reason why philosophers get tied in knots with words is this one – that a word has several meanings that do not in fact have the ‘family relations’ we assume they have – this is true for knowledge, perceiving, self, mind, consciousness – all the big words in this field. Wittgenstein’s solution of going back to using words the way they are ‘usually’ used is nothing more than an ostrich sticking its head in the sand.
So would you not agree that in Wheeler’s experiments the experimenter does not have a choice in the sense that she probably feels she has? She is not able to perform two alternative manoeuvres on the measuring set-up. She will perform a manoeuvre, and she may not yet know which, but there are no alternatives possible in this particular instance of the situation ensemble. She is no different from a computer programmed to set the experiment up a particular way before the particle went through the slits, contingent on a meteorite not shaking the apparatus after it went through the slits (causality is just as much an issue of what did not happen as what did). So if we think this sort of choosing tells us something important about physics, we have misunderstood physics, I believe.
Nice response. I agree almost down the line.
As far as the meaning of words go, I think that no word can have only one meaning because meaning, like all sense, is not assembled from fragments in isolation, but rather isolated temporarily from the totality of experience. Every word is a metaphor, and metaphor can be dialed in and out of context as dictated by the preference of the interpreter. Even when we are looking at something which has been written, we can argue over whether a chapter means this or that, whether or not the author intended to mean it. We accept that some meanings arise unintentionally within metaphor, and when creating art or writing a book, it is not uncommon to glimpse and develop meanings which were not planned.
To choose has a lower limit, between the personal and the sub-personal, which deals with the difference between accidents and ‘on purpose’, where accidents are assumed to demand correction; and there is an upper limit on choice, between the personal and the super-personal, in which we can calibrate our tolerance toward accidents, possibly choosing to let them be defined as artistic or intuitive, and even pursuing them to be developed.
I think that this lensing of choice into upper and lower limits is, like red and blue shift, a property of physics – of private physics. All experiences, feelings, words, etc. can explode into associations if examined closely. All matter can appear as fluctuations of energy, and all energy can appear as changes in the behavior of matter. Reversing the figure-ground relation is a subjective preference. So too is reversing the figure-ground relation of choice and determinism a subjective preference. If we say that our choices are determined, then we must explain why there is such a thing as having a feeling that we choose. Why would there be a difference, for example, in the way that we breathe and the way that we intentionally control our breathing? Why would different areas of the brain be involved in voluntary control, and why would voluntary muscle tissue be different from smooth muscle tissue, if there were no role for choice in physics? We have misunderstood physics in that we have misinterpreted the role of our involvement in that understanding.
We see physics as a collection of rules from which experiences follow, but I think that it can only be the other way around. Rules follow from experiences. Physics lags behind awareness. In the case of humans, our personal awareness lags behind our sub-personal awareness (as shown by Libet, etc) but that does not mean that our sub-personal awareness follows microphysical measurables. If you are going to look at the personal level of physics, you only have to recognize that you can intend to stand up before you stand up, or that you can create an opinion intentionally which is a compromise between select personal preferences and the expectations of a social group.
Previous Wittgenstein post here.
The Primacy of Spontaneous Unique Simplicity
This post is inspired by a long running (perpetual?) debate that I have going with a fellow consciousness aficionado who is a mathematics professor. He has some unique insights into artificial intelligence, particularly where advanced interpretations of the likes of Gödel, Turing, Kleene open up to speculations on the nature of machine consciousness. One of his results has been sort of a Multiple Worlds Interpretation in which numbers themselves would replace metaphysics, so that things like matter become inevitable illusions from within the experience of Platonic-arithmetic machines.
His theory is perhaps nowhere crystallized more understandably than in his Universal Dovetailer Argument (UDA), in which a single machine runs through every possible combination of programs, thereby creating everything that can be possible from basic arithmetic elements such as numbers, addition, and multiplication. This is based on the assumption that computation can duplicate the machinery which generates human consciousness – which is the assumption that I question. Below, I try to run through a treatment of where the conceptual problems of computationalism lie, and how to get past them by inverting the order in which his UD (Universal Dovetailer) runs. Instead of a program that mechanically writes increasingly complex programs, some of which achieve a threshold of self-awareness, I use PIP (Primordial Identity Pansensitivity) to put sense first and numbers second. Here’s how it goes:
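For readers unfamiliar with the mechanism being inverted, here is a minimal toy sketch of dovetailing itself (my illustration, not Marchal’s actual construction; the stand-in ‘programs’ are arbitrary): execution is interleaved so that every program gets unbounded running time without any single non-halting program blocking the rest.

```python
# A toy dovetailer: at stage n, admit the n-th program and step every
# admitted program once more.
from itertools import count

def make_program(k):
    """Stand-in for the k-th program in some enumeration: yields k, 2k, 3k, ..."""
    for i in count(1):
        yield k * i

def dovetail(stages):
    """Interleave execution so no program is ever starved.
    (A full dovetailer would also skip programs that have halted.)"""
    admitted = []
    for n in range(1, stages + 1):
        admitted.append(make_program(n))
        for prog in admitted:
            print(next(prog), end=" ")
    print()

dovetail(5)  # interleaved output of programs 1..5
```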
I. Trailing Dovetail Argument (TDA)
A. Computationalism makes two ontological assumptions which have not been properly challenged:
- The universality of recursive cardinality
- Complexity driven novelty.
Both of these, I intend to show, are intrinsically related to consciousness in a non-obvious way.
B. Universal Recursive Cardinality
Mathematics, I suggest, is defined by the assumption of universal cardinality: the universe is reducible to a multiplicity of discretely quantifiable units. The origin of cardinality, I suggest, is the partitioning or multiplication of a single, original unit, so that every subsequent unit is a recursive copy of the original.
Because recursiveness is assumed to be fundamental throughout mathematics, the idea of a new ‘one’ is impossible. Every instance of one is a recurrence of the identical and self-same ‘one’, or an inevitable permutation derived from it. By overlooking the possibility of absolute uniqueness, computationalism must conceive of all events as local reproductions of stereotypes from a Platonic template rather than as ‘true originals’.
A ‘true original’ is that which has no possible precedent. The number one would be a true original, but then all other integers represent multiple copies of one. All rational numbers represent partial copies of one. All prime numbers are still divisible by one, so not truly “prime”, but pseudo-prime in comparison to one. One, by contrast, is prime, relative to mathematics, but no number can be a true original since it is divisible and repeatable and therefore non-unique. A true original must be indivisible and unrepeatable, like an experience, or a person. Even an experience which is part of an experiential chain that is highly repetitive is, on some level unique in the history of the universe, unlike a mathematical expression such as 5 x 4 = 20, which is never any different than 5 x 4 = 20, regardless of the context.
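As a concrete sketch of what ‘recursive cardinality’ means here (my illustration, not from the post), consider a Peano-style construction: every number is just repeated application of one and the same successor step, so every unit is a recursive copy of the original ‘one’, never a new one.

```python
# A minimal Peano-style sketch of 'recursive cardinality'.
def succ(n: int) -> int:
    return n + 1

def peano(k: int) -> int:
    """Build k by applying the identical successor operation k times."""
    n = 0
    for _ in range(k):
        n = succ(n)  # each added 'one' is the self-same recurrence
    return n

# 5 x 4 = 20, identically, regardless of any context:
assert peano(5) * peano(4) == 20
```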
I think that when we assert a universe of recursive recombinations that knows no true originality, we should not disregard the fact that this strongly contradicts our intuitions about the proprietary nature of identity. A generic universe would seem to predict a very low interest in qualities such as individuality and originality, and little identification with trivial personal preferences. Of course, what we see is the precise opposite, as all celebrity is propelled by some suggestion of unrepeatability, and the fine-tuning of lifestyle choices is arguably the most prolific and successful feature of consumerism.
If the experienced universe were strictly an outcropping of a machine that by definition can create only trivially ‘new’ combinations of copies, why would those kinds of quantitatively recombined differences, such as that between 456098209093457976534 and 45609420909345797353, seem insignificant to us, while the difference between a belt worn by Elvis and a copy of that belt is demonstrably significant to many people?
C. Complexity Driven Novelty
Computationalism assumes finite simplicity; that is, it provides only a pseudo-uniqueness by virtue of the relatively low statistical probability of large numbers overlapping each other precisely. There is no irreducible originality to the original Mona Lisa; only the vastness of the physical painting’s microstructure prevents it from being exactly reproduced very easily. Such a perfect reproduction, under computationalism, is indistinguishable from the original, and therefore neither can be more original than the other (or, if there are unavoidable differences due to uncertainty and incompleteness, they would be noise differences of no consequence).
This is where information theory departs from realism, since reality provides memories and evidence of which Mona Lisa is new and which one was painted by Leonardo da Vinci at the beginning of the 16th century in Florence, Italy, Earth, Sol, Milky Way Galaxy*.
Mathematics can be said to allow for the possibility of novelty in only one direction: that of higher complexity. New qualities, by computationalism, must arise on the event horizons of something like the Universal Dovetailer. If that is the case, it seems odd that the language of qualia is one of rich simplicity rather than cumbersome computables. With comp, there can be no new ‘one’, but in reality, every human experience is exactly that – a new day, a new experience, even if it often seems much like the one before. Numbers don’t work that way. Each mechanical result is identical. A = A. A does not ‘seem much like the A before, yet in a new way‘. This is a huge problem with mathematics and theoretical physics: they don’t get the connection between novelty and simplicity, so they hope to find it out in the vastness of super-human complexity.
II. Computation as Puppetry
I think that even David Chalmers, whom I respect immensely for his contributions to philosophy of mind and in communicating the Hard Problem, missed a subtle but important distinction. The difference between a puppet and a zombie, while superficially innocuous, has profound implications for the formulation of a realistic critique of Strong AI. When Chalmers introduced or popularized the term zombie in reference to hypothetical perfect human duplicates which lack qualia and subjective experience, he inadvertently let an unscientific assumption leak in.
A zombie is supernatural because it implies the presence of an absence. It is an animated, un-dead cadaver in which a living person is no longer present. The unconsciousness of a puppet, however, is merely tautological – it is the natural absence of presence of consciousness which is the case with any symbolic representation of a character, such as a doll, cartoon, or emoticon. A symbolic representation, such as Bugs Bunny, can be mass produced using any suitable material substance or communication media. Even though Bugs is treated as a unique intellectual property, in reality, the title to that property is not unique and can be transferred, sold, shared, etc.
The reason that Intellectual Property law is such a problem is because anyone can take some ordinary piece of junk, put a Bugs Bunny picture on it, and sell more of it than they would have otherwise. Bugs can’t object to having his good name sullied by hack counterfeiters, so the image of Bugs Bunny is used both to falsely endorse an inferior product and to falsely impugn the reputation of a brand. The problem is, any reasonable facsimile of Bugs Bunny is just as authentic, in an Absolute sense, as any other. The only true original Bugs Bunny is the one we experience through our imagination and the imagination of Mel Blanc and the Looney Tunes animators.
The impulse to reify the legitimacy of intellectual property into law is related to the impulse to project agency and awareness onto machines. As a branch of the “pathetic fallacy”, which takes literally those human qualities which have been applied to non-humans as figurative conveniences of language, the computationalistic fallacy projects an assumed character-hood onto the machine as a whole. It reasons (falsely, I think) that since all our body can see of ourselves is a body, the body must be the original object from which the subject is produced through its functions. Such a conclusion, when we begin from mechanism, seems unavoidable at first.
III. Hypothesis
I propose that we reverse the two assumptions of mathematics above, so that
- Recursion is assumed to be derived from primordial spontaneity rather than the other way around.
- Novelty can only be meaningful if it re-asserts simplicity in addition to complexity.

This would mean:
- The expanding event horizon of the Universal Dovetailer would have to be composed of recordings of sensed experiences after the fact, rather than precursors to subjective simulation of the computation.
- Comp is untrue by virtue of diagonalization of immeasurable novelty against incompleteness.
- Sense out-incompletes arithmetic truth, and therefore leaves it frozen in stasis by comparison in every instant, and in eternity.
- Computation cannot animate anything except through the gullibility of the pathetic fallacy.
This may seem unfair or insulting to the many great minds who have been pioneering AI theory and development, but that is not my intent. By assertively pointing out the need to move from a model of consciousness which hinges on simulated spontaneity to a model in which spontaneity can never, by definition, be simulated, I am trying to express the importance and urgency of this shift. If I am right, the future of human understanding depends ultimately on our ability to graduate from the cul-de-sac of mechanistic supremacy to the more profound truth of rehabilitated animism. Feeling does not compute, because computation is how the masking of feeling into a localized unfeeling becomes possible.
IV. Reversing the Dovetailer
By uncovering the intrinsic antagonism between the above mathematical assumptions and the authentic nature of consciousness, it might be possible to ascertain a truer model of consciousness by reversing the order of the Universal Dovetailer (machine that builds the multiverse out of programs).
- The universality of recursive cardinality reverses as the Diagonalization of the Unique
- Complexity driven novelty can be reversed by Pushing the UD.
A. Diagonalization of the Unique
Under the hypothesis that computation lags behind experience*, no simulation of a brain can ever catch up to what a natural person can feel through that brain, since the natural person is constantly consuming the uniqueness of their experience before it can be measured by anything else. Since the uniqueness of subjectivity is immeasurable and unprecedented within its own inertial frame, no instrument from outside of that frame can capture it before it decoheres into cascades of increasingly generic public reflections.
PIP flips the presumption of Universal Recursive Cardinality inherent in mathematics so that all novelty exists as truly original simplicity, as well as a relatively new complex recombination, such that the continuum of novelty extends in both directions. This, if properly understood, should be a lightning bolt that recontextualizes the whole of mathematics. It is like discovering a new kind of negative number. Things like color and human feeling may exploit the addressing scheme that complex computation offers, but the important part of color or feeling is not in that address, but in the hyper-simplicity and absolute novelty that ‘now’ corresponds to that address. The incardinality of sense means that all feelings are more primitive than even the number one or the concept of singularity. They are rooted in the eternal ‘becoming of one’; before and after cardinality. Under PIP, computation is a public repetition of what is irreducibly unrepeatable and private. Computation can never get ahead of experience, because computation is an a posteriori measurement of it.
For example, a computer model of what an athlete will do on the field that is based on their past performance will always fail to account for the possibility that the next performance will be the first time that athlete does something that they never have done before and that they could not have done before. Natural identities (not characters, puppets, etc) are not only self-diagonalizing, natural identity itself is self-diagonalization. We are that which has not yet experienced the totality of its lifetime, and that incompleteness infuses our entire experience. The emergence of the unique always cheats prediction, since all prediction belongs to the measurements of an expired world which did not yet contain the next novelty.
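The ‘diagonalization’ language above borrows from Cantor’s classical argument; for reference, here is a minimal sketch of that move (my illustration), in which a constructed sequence escapes every row of any given enumeration.

```python
# Cantor's diagonal move: given any enumeration of infinite binary sequences,
# build one that differs from the n-th sequence at position n.
def diagonal(enumeration):
    """enumeration: n -> (i -> bit). Returns a sequence outside the list."""
    return lambda n: 1 - enumeration(n)(n)  # flip the n-th bit of row n

# Example enumeration: row n is the constant sequence of bit n % 2.
rows = lambda n: (lambda i: n % 2)
d = diagonal(rows)
assert all(d(n) != rows(n)(n) for n in range(100))  # d differs from every row
```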
B. Pushing the UD – If the UD is a program which pulls the experienced universe behind it as it extends the computed realm, faster than light, ahead of local appearances, then it assumes all phenomena are built bottom-up from generic, interchangeable bits. The hypothesis under PIP is that if there were a UD, it would be pushed by experience from the top down, as well as recollecting fragments of previous experiences from the bottom up. Each experience decays from immeasurable private qualia that is unique into public reflections that are generic recombinations of fixed elements. Reversing the Dovetailer puts universality on the defensive, so that it becomes a storage device rather than a pseudo-primitive machina ex deo.
The primacy of sense is corroborated by the intuition that every measure requires a ruler – some example which is presented as an index for comparison. The uniqueness comes first, and the computability follows by imitation. The un-numbered Great War becomes World War I only in retrospect. The second war does not follow the rule of world wars; it creates the rule by virtue of its similarities. The second war is unprecedented in its own right, as an original second world war, but unlike the number two, it is not literally another World War I. In short, experiences do not follow from rules; rules follow from experience.
V. Conclusions
If we extrapolate the assumptions of Computationalism out, I think that they would predict that the painting of the Mona Lisa is what always happens under the mathematical conditions posed by a combination of celestial motions, cells, bodies, brains, etc. There can be no truly original artwork, as all artworks are inevitable under some computable probability, even if the particular work is not predictable specifically by computation. Comp makes all originals derivatives of duplication. I suggest that it makes more sense that the primordial identity of sense experience is a fundamental originality from which duplication is derived. The number one is a generic copy – a one-ness which comments on an aspect of what is ultimately boundaryless inclusion rather than naming originality itself.
Under Multisense Realism (MSR), the sense-first view ultimately makes the most sense but it allows that the counter perspective, in which sense follows computation or physics, would appear to be true in another way, one which yields meaningful insights that could not be accessed otherwise.
When we shift our attention from the figure of comp in the background of sense to the figure of sense in the background of comp, the relation of originality shifts also. With sense first, true originality makes all computations into imposters. With computation first, arithmetic truth makes local appearances of originality artifacts of machine self-reference. Both are trivially true, but if the comp-first view were Absolutely true, there would be no plausible justification for such appearances of originality as qualitatively significant. A copy and an original should have no greater difference than a fifteenth copy and a sixteenth copy, and being the first person to discover America should have no more import than being the 1,588,237th person to discover America. The title of this post, 2013/10/13/2562, would be as good a title as any other referenceable string.
*This is not to suggest that human experience lags behind neurological computation. MSR proposes a model called eigenmorphism to clarify the personal/sub-personal distinction, in which neurological-level computation corresponds to sub-personal experience rather than personal-level experience. This explains the disappearance of free will in neuroscientific experiments such as Libet et al. Human personhood is simple but deep. Simultaneity is relative, and nowhere is that more true than along the continuum between the microphysical and the macrophenomenal. What can be experimented on publicly is, under MSR, a combination of near-isomorphic and near-contra-isomorphic to private experience.
Wittgenstein in Wonderland, Einstein under Glass
If I understand the idea correctly – that is, if there is enough of the idea which is not private to Ludwig Wittgenstein that it can be understood by anyone in general or myself in particular – then I think that he may have mistaken the concrete nature of experienced privacy for an abstract concept of isolation. From Philosophical Investigations:
The words of this language are to refer to what can be known only to the speaker; to his immediate, private, sensations. So another cannot understand the language. – http://plato.stanford.edu/entries/private-language/
To begin with, craniopagus (brain-conjoined) twins do actually share sensations that we would consider private.
The results of the test did not surprise the family, who had long suspected that even when one girl’s vision was angled away from the television, she was laughing at the images flashing in front of her sister’s eyes. The sensory exchange, they believe, extends to the girls’ taste buds: Krista likes ketchup, and Tatiana does not, something the family discovered when Tatiana tried to scrape the condiment off her own tongue, even when she was not eating it.
There is no reason, in principle, that it would not be technologically feasible to eventually export the connectivity which craniopagus twins experience through some kind of neural implant or neuroelectric multiplier. There are already computers that can be controlled directly through the brain.
Brain-computer interfaces that monitor brainwaves through EEG have already made their way to the market. NeuroSky’s headset uses EEG readings as well as electromyography to pick up signals about a person’s level of concentration to control toys and games (see “Next-Generation Toys Read Brain Waves, May Help Kids Focus”). Emotiv Systems sells a headset that reads EEG and facial expression to enhance the experience of gaming (see “Mind-Reading Game Controller”).
All that would be required in principle would be to reverse the technology to make it run in the receiving direction (computer>brain) and then imitate the kinds of neural connections which brain-conjoined twins have that allow them to share sensations. The neural connections themselves would not be aware of anything on a human level, so the sharing would not be public in the sense of sensations being available without the benefit of a living human brain; rather, the awareness could, to some extent, incite a version of itself in an experientially merged environment.
Because the success and precision of science have extended our knowledge so far beyond our native instruments, sometimes contradicting them successfully, we tend to believe that the view that diagnostic technology provides is superior to, or serves as a replacement for, our own awareness. While it is true that our own experience cannot reveal the same kinds of things that an fMRI or EEG can, I see that as a small detail compared to the wealth of value that our own awareness provides about the brain, the body, and the worlds we live in. Natural awareness is the ultimate diagnostic technology. Even though we can certainly benefit from a view outside of our own, there’s really no good reason to assume that what we feel, think, and experience isn’t a deeper level of insight into the nature of biochemical physics than we could possibly gain otherwise. We are evidence that physics does something besides collide particles in a void. Our experience is richer, smarter, and more empirically factual than what an instrument outside of our body can generate on its own. The problem is that our experience is so rich and so convoluted with private, proprietary knots that we can’t share very much of it. We, and the universe, are made of private language. It is the public reduction of privacy which is temporary and localized…it’s just localized as a lowest common denominator.
While it is true that at this stage in our technical development subjective experience can only be reported in a way which is limited by local social skills, there is no need to invoke a permanent ban on the future of communication and trans-private experience. Instead of trying to report on a subjective experience, it could be possible to share that experience through a neurological interface – or at least to exchange some empathic connection that would go farther than public communication.
If I had some psychedelic experience which allowed me to see a new primary color, I couldn’t communicate that publicly. But if I could just put on a device that allows our brains to connect, then someone else might be able to share the memory of what that looked like.
It seems to me that Wittgenstein’s private language argument (sacrosanct as it seems to be among the philosophically inclined) assumes privacy to be identical to isolation, rather than the primordial identity pansensitivity which I think it could be. If privacy is accomplished as I suggest, by the spatiotemporal ‘masking’ of eternity, then any experience that can be had is not a nonsense language to be ‘passed over in silence’, but rather a personally articulated fragment of the Totality. Language is only communication – intellectual measurement for sharing public-facing expressions. What we share privately is transmeasurable and inherently permeable to the Totality beneath the threshold of intellect.
Said another way, everything that we can experience is already shared by billions of neurons. Adding someone else’s neurons to that group should indeed be only a matter of building a synchronization technology. If, for instance, brain-conjoined twins have some experience that nobody else has (like being the first brain-conjoined twins to survive to age 40 or something), then they already share that experience, so it would no longer be a ‘private language’. The true future of AI may not be in simulating awareness as information, but in using information to share awareness. Certainly the success of social networking and MMORPGs has shown us that what we really want out of computers is not for them to be us, but for us to be with each other in worlds we create.
I propose that rather than beginning from the position of awareness being a simulation to represent a reality that is senseless and unconscious, we should try assuming that awareness itself is the undoubtable absolute. I would guess that each kind of awareness already understands itself far better than we understand math or physics; it is only the vastness of human experience which prevents that understanding from being shared on all levels of itself, all of the time.
The way to understand consciousness would not be to reduce it to a public language of physics and math, since our understanding of our public experience is itself robotic and approximated by multiple filters of measurement. To get at the nature of qualia and quanta requires stripping down the whole of nature to Absolute fundamentals – beyond language and beyond measurement. We must question sense itself, and we must rehabilitate our worldview so that we ourselves can live inside of it. We should seek the transmeasurable nature of ourselves, not just the cells of our brain or the behavioral games that we have evolved as one particular species in the world. The toy model of consciousness provided by logical positivism and structural realism is, in my opinion, a good start, but in the wrong direction – a necessary detour which is uniquely (privately?) appropriate to a particular phase of modernism. To progress beyond that I think requires making the greatest cosmological 180 since Galileo. Einstein had it right, but he did not generalize relativity far enough. His view was so advanced in the spatialization of time and light that he reduced awareness to a one dimensional vector. What I think he missed, is that if we begin with sensitivity, then light becomes a capacity with which to modulate insensitivity – which is exactly what we see when we share light across more than one slit – a modulation of masked sensitivity shared by matter independently of spacetime.
Jesse Prinz – On the (Dis)unity of Consciousness
Jesse Prinz gives a well-developed perspective on neuronal synchronization as the correlate of attention and explores the question of binding. As always, neuroscience offers important details and clues to guide our understanding; however, knowledge alone may not be the pure and unbiased resource that we presume it to be. The assumptions that we make about a world in which we have already defined consciousness to be the behavior of neurons are not neutral. They direct, and in some cases self-validate, the approach as much as any cognitive bias could. For those who watch the video, here are my comments:
To begin with, aren’t unity and disunity qualitative discernments within consciousness? To me, the binding problem is most likely generated from the assumption that consciousness arises a posteriori of distinctions like part-whole, when in fact, awareness may be identical to the capacity for any distinction at all, and is therefore outside of any notion of ‘it-ness’, ‘unity’, or multiplicity. To me, it is clear that consciousness is unified, not-unified, both unified and not unified, and neither unified nor not unified. If we call consciousness ‘attention’, what should we call our awareness of the periphery of our awareness – of memories and intuitions?
The assumption that needs to be questioned is that sub-conscious awareness is different from consciousness in some material way. Our awareness of our awareness is of course limited, but that doesn’t mean that low level ‘processing’ is not also private experience in its own right.
Pointing to synchronization of neuronal activity as causing attention just pushes the hard problem down to a microphenomenal level. In order to synchronize with each other, neurons themselves would ostensibly have to be aware and pay attention to each other in some way.
Synchrony may not be the cause, but the symptom. Experience is stepped down from the top and up from the bottom in the same way that I am using codes of letters to make words which together communicate my top-down ideas. Neurons are the brain’s ‘alphabet’; they are not the author of consciousness. They are not sufficient for consciousness, but they are necessary for a human quality of consciousness. (In my opinion.)
Later on, when he covers the idea of Primitive Unity, he dismisses holistic awareness on the basis that separate areas of the brain contribute separate information, but that is based on an expectation that the brain is the cause of awareness rather than the event horizon of privacy as it becomes public (and vice versa) on many levels and scales. The whole idea of ‘building whole experiences’ from atomistic parts assumes holism as a possibility, even as it seeks to deny that possibility. How can a whole experience be built without an expectation of wholes?
Attention is not what consciousness is; it is what consciousness does. In order for attention to exist, there must first be the capacity to receive sensation and to appreciate that sensation qualitatively. Only then, when we have something to pay attention to, can we find our capacity to participate actively in what we perceive.
As far as the refrigerator light idea goes, I think that is a good line of thought to explore with consciousness, as I think it should lead to a questioning not only of the constancy of the light, but of the darkness as well. We cannot assume that either the naive state (the light is always on) or the sophisticated state (the light is on when the door is open and off when it is closed) is more real than the other. Instead, each view only reflects the perspective which is getting the attention. When we look at consciousness from the point of view of a brain, we can only find explanations which break consciousness apart into subconscious and impersonal operations. It is a confirmation bias of a different sort, one which is never considered.
Diogenes Revenge: Cynicism, Semiotics, and the Evaporating Standard
Diogenes was called Kynos — Greek for dog — for his lifestyle and contrariness. It was from this word for dog that we get the word Cynic.
Diogenes is also said to have worked minting coins with his father until he was 60, but was then exiled for debasing the coinage. – source
In comparing the semiotics of C.S. Peirce and Jean Baudrillard, two related themes emerge concerning the nature of signs. Peirce famously used trichotomy arrangements to describe the relations, while Baudrillard talked about four stages of simulation, each more removed from authenticity. In Peirce’s formulation, Index, Icon, and Symbol work as separate strategies for encoding meaning. An index is a direct consequence or indication of some reality. An icon is a likeness of some reality. A symbol is a code which has its meaning assigned intentionally.
Baudrillard saw the sign as a succession of adulterations: first, an original reality is copied; second, the copy masks the original in some way; third, a denatured copy in which the debasement has been masked; and fourth, a pure simulacrum – a copy with no original, composed only of signs reflecting each other.
Whether we use three categories or four stages, or some other number of partitions along a continuum, an overall pattern can be arranged which suggests a logarithmic evaporation, an evolution from the authentic and local to the generic and universal. Korzybski’s map and territory distinction fits in here too, as human efforts to automate nature result in maps, maps of maps, and maps of all possible mapping.
The history of human timekeeping reveals the earthy roots of time as a social construct based on physical norms. Timekeeping was, from the beginning linked with government and control of resources.
According to Callisthenes, the Persians were using water clocks in 328 BC to ensure a just and exact distribution of water from qanats to their shareholders for agricultural irrigation. The use of water clocks in Iran, especially in Zeebad, dates back to 500 BC. Later they were also used to determine the exact holy days of pre-Islamic religions, such as the Nowruz, Chelah, or Yalda – the shortest, longest, and equal-length days and nights of the year. The water clocks used in Iran were one of the most practical ancient tools for timing the yearly calendar. – source
Anything which burns or flows at a steady rate can be used as a clock. Oil lamps, candles, and incense have been used as clocks, as well as the more familiar sand hourglass, shadow clocks, and clepsydrae (water clocks). During the day, a simple stick in the ground can provide an index of the sun’s position. These kinds of clocks, in which the nature of physics is accessed directly, would correspond to Baudrillard’s first level of simulation – they are faithful copies of the sun’s movement, or of the depletion of some material condition.
Staying within this same agricultural era of civilization, we can understand the birth of currency in the same way. Trading of everyday commodities could be indexed with concentrated physical commodities like livestock, and also with other objects like shells, which had intrinsic value for being attractive and uncommon, as well as secondary value for being durable and portable objects to trade. In the same way that coins came to replace shells, mechanical clocks and watches came to replace physical index clocks. The notions of time and money – differing in that time refers to a commodity beyond the scope of human control while money refers specifically to human control – both serve as regulatory standards for civilization, as well as equivalents for each other in many instances (‘man hours’, productivity).
In the next phase of simulation, coins combined the intrinsic and secondary values of things like shells with a mint mark to ensure the transactional viability of the token. The icon of money, as Diogenes discovered, can be extended much further than the index, since anything that bears the official seal will be taken as money, regardless of the actual metal content of the coin. Bank notes began as a promise to pay the bearer a sum of coins. In the world of time measurement, the production of clocks, clock towers, and watches spread the clock-face icon around the world, each one synchronized to a local, and eventually a coordinated universal, time. Industrial workers were divided into shifts, with each crew punching a timeclock to verify their hours at work and on breaks. While the nature of time makes counterfeiting a different kind of prospect, the practice of having others clock out for you, or having a cab driver take the long way around to run the meter longer, are ways that the iconic nature of the mechanical clock can be exploited. Being one step removed from physical reality, iconic technologies provide an early opportunity for ‘hacking’.
| physical territory > index | local map > icon | symbol > universal map |
| --- | --- | --- |
| water clock, sand clock | sundial/clock face | digital timecode |
| trade > shells | coins > check > paper | plastic > digital > virtual |
| production > organization | bonds > stock | futures > derivatives |
| real estate | mortgage, rent | speculation > derivatives |
| genuine aesthetic | imitation synthetic | artificial emulation |
| non-verbal communication | language | data |
The last three decades have been marked by the rise of the digital economy. Paper money and coins have largely been replaced by plastic cards connected to electronic accounts, which have in turn entered the final stage of simulacra – a pure digital encoding. The promissory note iconography and the physical indexicality of wealth have been stripped away, leaving behind a residue of immediate abstraction. The transaction is not a promise, it is instantaneous. It is not wealth, it is only a license to obtain wealth from the coordinated universal system.
Time has entered its symbolic phase as well. The first exposure to computers that consumers had in the 1970s was in the form of digital watches and calculators. Time and money. First LED, and then LCD, displays became available, in both expensive and inexpensive versions. For a whole generation of kids, their first electronic devices were digital calculators and watches. There had been digital clocks before, based on turning wheels or flipping tiles, but the difference here was that the electronic numbers did not look like regular numbers. Nobody had ever seen numbers rendered as these kinds of generic combinatorial figures before. Every kid quickly learned how to spell out words by turning the numbers upside down (you couldn’t make much… 71077345 spells ShELL OIL) – sort of like emoticons.
Beneath the surface, however, something had changed. The digital readouts were not even real numbers; they were icons of numbers, and icons which exposed the mechanics of their own iconography. Each number was only a combinatorial pattern of binary segments – a specific fraction of the full 8.8.8.8.8.8.8.8. pattern. You could even see the faint outlines of the complete pattern of 8’s if you looked closely, both in LED and LCD. The semiotic process had moved one step closer to the technological and away from the consumer. Making sense of these patterns as numbers was now part of your job, and the language of Arabic numerals became data to be processed.
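A minimal sketch of my own (segment names follow the standard a–g convention for seven-segment displays) of how each digit is only a fraction of the full ‘8’ pattern, and of the upside-down trick mentioned above:

```python
# Each digit on a seven-segment display is a subset of the segments of '8'.
# Segments: a (top), b (top-right), c (bottom-right), d (bottom),
# e (bottom-left), f (top-left), g (middle).
SEGMENTS = {
    "0": set("abcdef"), "1": set("bc"),    "2": set("abdeg"),
    "3": set("abcdg"),  "4": set("bcfg"),  "5": set("acdfg"),
    "6": set("acdefg"), "7": set("abc"),   "8": set("abcdefg"),
    "9": set("abcdfg"),
}

# Every digit is a specific fraction of the complete '8' pattern:
assert all(segs <= SEGMENTS["8"] for segs in SEGMENTS.values())

# The upside-down calculator trick: rotating the display makes digits read
# as letter-like shapes, so 71077345 becomes "ShELLOIL".
UPSIDE_DOWN = {"0": "O", "1": "I", "3": "E", "4": "h", "5": "S", "7": "L"}
print("".join(UPSIDE_DOWN[d] for d in reversed("71077345")))  # ShELLOIL
```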
Since that time, the digital revolution has shaped the making and breaking of world markets. Each financial bubble spread out, Diogenes style, through the banking and finance industry behind a tide of abstraction. Ultra-fast trading which leverages meaningless shifts in transaction patterns has become the new standard, replacing traditional market analysis. From leveraged buyouts in the 1980s to junk bonds, tech IPOs, Credit Default Swaps, and the rest, the world economy is no longer an index or icon of wealth, it is a symbol which refers only to itself.
The advent of 3D printing marks the opposite trend. Where conventional computer printing allowed consumers to generate their own 2D icons from machines running on symbols, the new wave of micro-fabrication technology extends that beyond the icon to the index level. Parts, devices, food, even living tissue can be extruded from symbol directly into material reality. Perhaps this is a fifth level of simulation – the copy with no original which replaces the need for the original… a trophy in Diogenes’ honor.