Archive

Posts Tagged ‘computers’

This Is Not A Pipe (But it is a triangle)

October 25, 2017

[Image: RGB triangle]

The physical semiconductor mechanism which computes this triangle contains nothing triangular.

The neurological tissue, and the patterns of its chemical excitation which correlate with the experience of seeing a triangle, also contain nothing triangular.

The triangle exists because there is a visible phenomenon presented. The visible phenomenon is not a concrete physical object, nor is it an abstract mathematical concept.

The common sense which unites the semiconductor, the neuron, the triangle, and the mathematical/geometric model of triangularity is not a sense which can be physically located in public space, nor can it be logically demonstrated as a proof within the intellect. The common sense which underlies them all is the aesthetic presentation itself.


Dereference Theory of Consciousness

October 24, 2016

Draft 1.1

IMO machine consciousness will ultimately prove to be an oxymoron, but if we did want to look for consciousness analogs in machine behavior, here is an idea that occurred to me recently:

Look for nested dereferencing, i.e. places where persistent information processing structures load real-time sense data about the loading of all-time sense data.


At the intersection of philosophy of mind, computer science, and quantum mechanics is the problem of instantiating awareness. What follows is an attempt to provide a deeper understanding of the significance of dereferencing and how it applies to integration of information and the quantum measurement problem. This is a broad conjecture about the nature of sensation as it pertains to the function of larger information processes, with an eye toward defining and identifying specific neuroscientific or cognitive signatures to correlate with conscious activity.

A dereference event can be thought of as the precise point at which a system that is designed or evolved to expect a range of inputs receives the real input itself. This sentence, for example, invites an expectation of a terminal clause, the terms of which are expected to be English words semantically linked to the rest of the sentence. English grammar provides templates of possible communication, but the actual communication relies on specific content to fill in those forms. The ability to parse English can be simulated by unconscious, rule-based mechanisms; however, I suggest that the ability to understand that communication involves a rule-breaking replacement of an existing parse theory with empirical, semantic fact. The structure of a language, its dictionary, etc., is a reference body for timeless logic structures. Its purpose is to enable a channel for sending, receiving, and modifying messages which pertain to dereferenced events in real time. It is through contact with real-time sense events that communication channels can develop in the first place, and continue to self-modify.

What is proposed here is an alternative to the Many-Worlds Interpretation of quantum wave function collapse – an inversion of its fundamental assumption, in which all possible phenomena diverge or diffract within a single context of concretely sensed events. The wave function collapse in this view is not the result of a measurement of what is already objectively ‘out there’, nor is it the creation of objective reality by subjective experiences ‘in here’, but a quantized return to an increasingly ‘re-contextualized’ state: a perpetual and unpredictable re-acquaintance with unpredictable re-acquaintance.

In programmatic terms, the pointer p is dereferenced to the concrete value (p = “the current temperature”, dereferenced *p = “is now 78 degrees Fahrenheit”). To get to a better model of conscious experience (and I think that this plugs into Orch OR, IIT, Interface Theory, and Global Workspace), we should look at the nested or double dereferencing operation. The dereferencing of dereferencing (**p) is functionally identical to awareness of awareness or perception of perception. In the temperature example, **p is “the current check of the current check of the temperature”. This not only points us to the familiar strange-loop models of consciousness, but extends the loop outward to the environment and the environment into the loop. Checking the environment of the environmental check is a gateway to veridical perception. The loop modifies its own capacity for self-modification.
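The single versus double dereference distinction can be sketched with a toy memory model (a minimal sketch in Python, which has no native pointers; the `memory` dict and `deref` helper are illustrative inventions, not anything from the post):

```python
# Toy pointer model: "memory" maps addresses (here, just names) to contents.
memory = {
    "temperature": 78,     # the concrete sensed value
    "p": "temperature",    # p's content is the *address* of the value (an int*)
}

def deref(address):
    """Follow one level of indirection: read what is stored at an address."""
    return memory[address]

p = "temperature"          # the pointer itself: an address, not a value
pp = "p"                   # a pointer to the pointer (an int**)

print(deref(p))            # single dereference (*p): 78, the value itself
print(deref(deref(pp)))    # double dereference (**pp): also 78, but reached
                           # via the reference to the reference
```

The point of the sketch is that `**pp` arrives at the same concrete value as `*p`, but only by first taking the act of referring itself as an object – the structural analog of the post's "check of the check."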

Disclaimer: This description of consciousness as meta-dereferencing is intended as a metaphor only. In my view, information processing cannot generate conscious experience; however, conscious experience can possibly be better traced by studying dereferencing functions. This view differs from Gödel sentences or strange loops in that those structures refer to reference (“This sentence is true”), while the dereference loop specifically points away from pointers, rules, programs, formal systems, etc., and toward i/o conditioning of i/o conditions. This would be a way for information-theoretic principles to escape the nonlocality of superposition and access an inflection point for authentic realization (in public space-time as shared experience). “Bing” = **Φ. In other words, by dereferencing dereference, the potentially concrete is made truly concrete. Sense experience is embodied as stereo-morphic tangible realism, and tangible realism is disembodied as a sense of fact-gathering about sensed fact-gatherings.

Dereference theory is an appeal to anti-simulation ontology. Because this description of cognition implicates nested input/output operations across physically or qualitatively *real* events, the subjective result is a reduced set of real sense conditions rather than a confabulated, solipsistic phenomenology. The subjective sense condition does not refer only to private or generic labels within a feed-forward thinking mechanism, but also to a model-free foundation and a genuine sensitivity of the local ‘hardware’ to external conditions in the public here-and-now. This sensitivity can be conceived initially as a universal property of some or all physical substrates (material panpsychism); however, I think it is vital to progress beyond this assumption toward a nondual fundamental-awareness view. In other words, subjective consciousness is derived from a dereference of a local inertial frame of expectation to a universal inertial frame of unprecedented novelties which decay as repetition. Each event is instantiated as an eternally unique experience, but propagated as repetitive/normalized translations in every other frame of experience. That propagation is the foundation for causality and entropy.

Could the Internet come to life?

December 20, 2014

Could the Internet come to life?

It sounds like a silly proposition and it is a little tongue in cheek, but trying to come up with an answer could have ramifications for the sciences of consciousness and sentience.

Years ago, cosmologist Paul Davies talked about a theory that organic matter, and therefore life, is intrinsically no different from inorganic matter – the only difference being the amount of complexity.

So when a system becomes sufficiently complex, the property we know (but still can’t define) as ‘life’ might emerge spontaneously, as it did from amino acids and proteins three billion years ago.

We have such a system today in the internet. As far back as 2005, Kevin Kelly talked about how the internet would soon have as many ‘nodes’ as a human brain. It has even been written about in fiction, in Robert J. Sawyer’s Wake series (Page on sfwriter.com).

And since human consciousness and all the deep abstract knowledge, creativity, love, etc it gives us arises from a staggering number of deceptively simple parts, couldn’t the same thing happen to the internet (or another sufficiently large and complex system)?

I’m trying to crowdsource a series of articles on the topic (Could the Internet come to life? by Drew Turney – Beacon), and I know this isn’t the place to advertise, but even though I’d love everyone who reads this to back me, I’m more interested in getting more food for thought from any responses should I get the project off the ground.

I think that the responses here are going to tend toward supporting one of two worldviews. In the first worldview, the facts of physics and information science lead us inevitably to conclude that consciousness and life are purely a matter of particular configurations of forms and functions. Whether those forms and functions are strictly tied to specific materials, or are substrate independent and therefore purely logical entities, is another tier of the debate, but all those who subscribe to the first worldview are in agreement: if a particular set of functions is instantiated, the result will be life and conscious experience.

The second worldview would include all of those who suspect that something more is required…that information or physics may be necessary for life, but not sufficient. That worldview can be divided further into those who think the other factor is spiritual or supernatural, and those who think it is an as-yet-undiscovered factor. Those in the first worldview camp might assert that the second worldview is unlikely or impossible because:

1) Causal Closure eliminates non-physical causes of physical phenomena
2) Bell’s Theorem eliminates hidden variables (including vital essences)
3) Church-Turing Thesis supports the universality of computation

1) Causal Closure – The idea that all physical effects have physical causes can either be seen as an ironclad law of the universe, or as a tautological fallacy that begs the question of materialism. On the one hand, adherents of the first worldview can say that if there were any non-physical cause of a physical effect, we would by definition see the effect of that cause as physical. There is simply no room in the laws of physics for magical, non-local forces, as the tiniest deviation in experimental data would show up for us as a paradigm-shifting event in the history of physics.

On the other hand, adherents of the second view can either point to a theological transcendence of physics which is miraculous and is beyond physical explanation, or they can question the suppositions of causal closure as biased from the start. Since all physical measurements are made using physical instruments, any metaphysical contact might be minimized or eliminated.

It could be argued that physics is like wearing colored glasses, so that rather than proving that all phenomena can be reduced to ‘red images’, all that it proves is that working with the public-facing exteriors of nature yields a predictably public-facing exterior logic. Rather than diminishing the significance of private-facing phenomenal experience, it may be physics which is the diminished ‘tip of the iceberg’, with the remaining bulk of the iceberg being a transphysical, transpersonal firmament. Just as we observe the ability of our own senses to ‘fill-in’ gaps in perceptual continuity, it could be that physics has a similar plasticity. Relativity may extend beyond physics, such that physics itself is a curvature of deeper conscious/metaphysical attractors.

Another alternative to assuming causal closure is to see the different levels of description of physics as semi-permeable to causality. Our bodies are made of living cells, but on that layer of description ‘we’ don’t exist. A TV show doesn’t ‘exist’ on the level of illuminated pixels or digital data in a TV set. Each level of description is defined by a scope and scale of perception which is only meaningful on that scale. If we apply strong causal closure, there would be no room for any such thing as a level of description or conscious perspective. Physics has no observers, unless we smuggle them in as unacknowledged voyeurs from our own non-physically-accounted-for experience.

To my mind, it’s difficult to defend causal closure in light of recent changes in astrophysics where the vast bulk of the universe’s mass has been suddenly re-categorized as dark energy and dark matter. Not only could these newly minted phenomena be ‘dark’ because they are metaphysical, but they show that physics cannot be counted on to limit itself to any particular definition of what counts as physics.

2) Here’s a passage about Bell’s Theorem which says it better than I could:

“Bell’s Theorem, expressed in a simple equation called an ‘inequality’, could be put to a direct test. It is a reflection of the fact that no signal containing any information can travel faster than the speed of light. This means that if hidden-variables theory exists to make quantum mechanics a deterministic theory, the information contained in these ‘variables’ cannot be transmitted faster than light. This is what physicists call a ‘local’ theory. John Bell discovered that, in order for Bohm’s hidden-variable theory to work, it would have to be very badly ‘non-local’, meaning that it would have to allow for information to travel faster than the speed of light. This means that, if we accept hidden-variable theory to clean up quantum mechanics because we have decided that we no longer like the idea of assigning probabilities to events at the atomic scale, we would have to give up special relativity. This is an unsatisfactory bargain.” – Archive of Astronomy Questions and Answers


From another article (Physics: Bell’s theorem still reverberates):

As Bell proved in 1964, this leaves two options for the nature of reality. The first is that reality is irreducibly random, meaning that there are no hidden variables that “determine the results of individual measurements”. The second option is that reality is ‘non-local’, meaning that “the setting of one measuring device can influence the reading of another instrument, however remote”.


Bell’s inequality could go either way then. Nature could be random and local, non-local and physical, or non-local and metaphysical…or perhaps all of the above. We don’t have to conceive of ‘vital essences’ in the sense of dark physics that connects our private will to public matter and energy, but we can see instead that physics is a masked or spatiotemporally diffracted reflection of a nature that is not only trans-physical, but perhaps trans-dimensional and trans-ontological. It may be that beneath every fact is a kind of fiction.

If particles are, as Fritjof Capra said “tendencies to exist”, then the ground of being may be conceived of as a ‘pretend’-ency to exist. This makes sense to me, since we experience with our own imagination a constant stream of interior rehearsals for futures that might never be and histories that probably didn’t happen the way that we think. Rather than thinking of our own intellect as purely a vastly complex system on a biochemical scale, we may also think of it as a vastly simple non-system, like a monad, of awareness which is primordial and fundamentally inseparable from the universe as a whole.

3) The Church-Turing Thesis has to do with computability – whether all functions of mathematics can be broken down into simple arithmetic operations. If we accept it as true, then it can be reasoned through the first worldview that since the brain is physical, and physics can be modeled mathematically, there should be no reason why a brain cannot be simulated as a computer program.

There are some possible problems with this:

a) The brain and its behavior may not be physically complete. There are a lot of theories about consciousness and the brain. Penrose and Hameroff’s quantum consciousness postulates that consciousness depends on quantum computations within cytoskeletal structures called microtubules. In that case, what the brain does may not be entirely physically accessible. According to Orch OR, the brain’s behavior can be caused ultimately by quantum wavefunction collapse through large-scale Orchestrated Objective Reductions. Quantum events of this sort could not be reproduced or measured before they happen, so there is no reason to expect that a computer model of a brain would work.

b) Consciousness may not be computable. Like Bell’s work in quantum mechanics, mathematics took an enigmatic turn with Gödel’s Incompleteness Theorem. Long story short, Gödel showed that there are truths within any axiomatic system which cannot be proved without reaching outside of that system. Formal logic is incomplete. Like Bell’s inequality, incompleteness can take us into a world where either epistemology breaks down completely and we have no way of ever knowing whether what we know is true, or we are compelled to consider that logic itself is dependent upon a more transcendent, Platonic realm of arithmetic truth.

This leads to another question about whether even this kind of super-logical truth is the generator of consciousness or whether consciousness of some sort is required a priori to any formulation of ‘truth’. To me, it makes no sense for there to be truths which are undetectable, and it makes no sense for an undetectable truth to develop sensation to detect itself, so I’m convinced that arithmetic truth is a reduction of the deeper ground of being, which is not only logical and generic, but aesthetic and proprietary. Thinking is a form of feeling, rather than the other way around. No arithmetic code can produce a feeling on its own.

c) Computation may not support awareness. Those who are used to the first worldview may find this prospect objectionable, even offensive to their sensibilities. This in itself is an interesting response to something which is supposed to be scientific and unsentimental, but that is another topic. Sort of. What is at stake here is the sanctity of simulation. The idea that anything which can be substituted with sufficiently high resolution is functionally identical to the original is at the heart of the modern technological worldview. If you have a good enough cochlear implant, it is thought, of course it would be ‘the same as’ a biological ear. By extension, however, that reasoning would imply that a good enough simulation of a glass of water would be drinkable.

It seems obvious that no computer generated image of water would be drinkable, but some would say that it would be drinkable if you yourself also existed in that simulation. Of course, if that were the case, anything could be drinkable, including the sky, the alphabet, etc, whatever was programmed to be drinkable in that sim-world.

We should ask, then: since computational physics is so loose and ‘real’ physics so rigidly constrained, does that mean that physics and computation form a substance dualism in which they cannot directly interact, or does it mean that physics is subsumed within computation, so that our world is only one of a set of many others, or of every other possible world (as in some MWI theories)?

d) Computation may rely on ungrounded symbols. Another topic that gets a lot of people very irritated is the line of philosophical questioning that includes Searle’s Chinese Room and Leibniz’s Mill argument. If you’ve read this far, you’re probably already familiar with these, but the upshot is that parsimony compels us to question whether any such thing as subjective experience could be plausible in a mechanical system. Causal closure is seen not only to prohibit metaphysics, but also any chance of something like consciousness emerging through mechanical chain reactions alone.

The Church-Turing Thesis works in the opposite way here: since all mechanisms can be reduced to computation and all computation can be reduced to arithmetic steps, there is no way to justify extra-arithmetic levels of description. If we say that the brain boils down to assembly-language-type transactions, then we need a completely superfluous and unsupportable injection of brute emergence to inflate computation to phenomenal awareness.

The symbol grounding problem shows how symbols can be manipulated ‘apathetically’ to an arbitrary degree of sophistication. The passing of the Turing test is meaningless ultimately since it depends on a subjective appraisal of a distant subjectivity. There isn’t any logical reason why a computer program to simulate a brain or human communication would not be a ‘zombie’, relying on purely quantitative-syntactic manipulations rather than empathetic investment. Since we ourselves can pretend to care, without really caring, we can deduce that there may be no way to separate out a public-facing effect from a private-facing affect. We can lie and pretend and say words that we don’t mean, so we cannot naively assume that just because we build a mouth which parrots speech that meaning will spontaneously arise in the mouth, or the speech, or the ‘system’ as a whole.

In the end, I think that we can’t have it both ways. Either we say that consciousness is intrinsic and irreducible, or we admit that it makes no sense as a product of unconscious mechanisms.

The question of whether the internet could come to life is, to me, different from the question of whether Pinocchio could become a real boy only in degree. Pinocchio is a three-dimensional puppet which is animated through a fourth dimension of time. The puppeteer adds a fifth dimension to that animation, intentionally lending their own conscious symbol-grounding to the puppet’s body. The puppet has no awareness of its own. What is different about an AI is that it would take that fifth-dimensional control in-house, as it were.

It gets very tricky here, since our human experience has always been with other beings that are self-directed – living beings which are conscious or aware to some extent. We have no precedent in our evolution for relating to a synthetic entity which is designed explicitly to simulate the responses of a living creature. So far, what we have seen does not support, in my opinion, any fundamental progress. Pinocchio has many voices and outfits now, but he is still wooden. The uncanny valley effect gives us a glimpse of how we are intuitively and aesthetically repulsed by that which pretends to be alive. At this point, my conclusion is that we have nothing to fear from technology developing its own consciousness – no more than we fear books beginning to write their own stories. There is, however, a danger of humans abdicating their responsibility to AI systems, and thereby endangering the quality of human life. Putting ‘unpersons’ in charge of the affairs of real people may have dire consequences over time.

“There is no information without representation”

February 15, 2014

My rebuttal to this, from New Empiricism:

Information is one of the most poorly defined terms in philosophy but it is a well defined concept in physical theory. How can it be that a clear idea in one branch of knowledge can be murky in another?

The physical meaning of information is succinctly summarised in the Wikibook on “Consciousness Studies”:

“The number of distinguishable states that a system can possess is the amount of information that can be encoded by the system.”

In most cases a “state of a system” boils down to arrangements of objects, either material objects laid out in the world or sequences of objects such as the succession of signals in a telephone line. So information is represented by physical things laid out in space and time. There is no information without this representation as an arrangement of physical objects.

Information can be processed by machines. As an example, computers use the “distinguishable states” of charge in electrical components to perform a host of useful tasks. They use the state of electrical charge in electronic components because charge can be manipulated rapidly and can be impressed on tiny components, however, computers could use the states of steel balls in boxes or carrots flowing on conveyor belts to achieve the same effect, albeit more slowly. There is nothing special about electronic computers beyond their speed, complexity and compactness. They are just machines that contain three dimensional arrangements of matter.
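The quoted definition – information as the count of distinguishable states, regardless of substrate – can be sketched in a line of arithmetic (a Python sketch; the `bits` helper is my own naming, not anything from the article):

```python
import math

# Information capacity depends only on how many states can be distinguished,
# not on what physically realizes them (charge, steel balls, carrots...).
def bits(distinguishable_states: int) -> float:
    return math.log2(distinguishable_states)

print(bits(2))    # one two-state component (charged / not charged): 1.0 bit
print(bits(256))  # 256 distinguishable states, e.g. one byte: 8.0 bits
```

On this accounting, a register of charges and a box of steel balls with the same number of distinguishable configurations encode exactly the same amount of information, differing only in speed and compactness.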

Philosophers use information in a much less well-defined fashion. Philosophical information is far more fuzzy and involves the quality of things such as hardness or blueness. So how does philosophical blueness differ from a physical information state?

Physical information about the world is a generalised state change that is related to particular events in the world and could be impressed on any substrate, such as steel balls, etc. This allows information to be transmitted from place to place. As an example, a heat sensor in England could trigger a switch that opens a trapdoor that drops a ball that is monitored on a camera that causes changes in charge patterns in a computer that are transmitted as sounds on a radio in the USA. If the sound on the radio makes a cat jump and knock over a vase, then it is probably valid to look at the vase and say “it’s hot in England”. So physical information is related to its source by the causal chain of preceding steps. Notice that each of these steps is a physical event, so there is no information without representation as a state in the real world.

In the philosophical idea of information “hot” or “cold” are particular states in the mind. Our mental states are not uniquely related to the state of the world outside our bodies. As an example, human heat sensors are fickle so a blindfolded person might contain the state called “cold” when their hand is placed in water at 60 degrees or ice water at zero degrees. Our “cold” is subjective and does not have a fixed reference point in the world. Our own information is a particular state that could be induced by a variety of events in the world whereas physical information can be a variety of states triggered by a particular event in the world.

To summarise, information in physics is a state change in any substrate. It can be related to the state change in another substrate if a causal chain exists between the two substrates. Information in the mind is the state of the particular substrate that forms your particular mind.

Your mind is a state of a particular substrate but a “state” is an arrangement of events. The crucial questions for the scientist are “what events?” and “how many independent directions can be used for arranging these events?”. We can tell from our experience that at least four independent axes (or “dimensions”) are involved.

Note

The fact that there is no information without representation of the information as a physical state means that peculiar non-physical claims such as Cartesian Dualism and Dennett’s “logical space” are not credible.

Daniel C Dennett. (1991). Consciousness Explained. Little, Brown & Co. USA. Available as a Penguin Book.

Dennett says: “So we do have a way of making sense of the idea of phenomenal space – as a logical space. This is a space into which or in which nothing is literally projected; its properties are simply constituted by the beliefs of the (heterophenomenological) subject.” Dennett is wrong because if the space contains information then it must be instantiated as a physical entity; if it is not instantiated then it does not exist, and Dennett is simply denying the experience that we all share to avoid explaining it. Either we have simultaneous events or we are just a single point; if we have simultaneous events, the space of our experience exists.

“So information is represented by physical things laid out in space and time.”

Why would physical things ‘represent’ anything though? Without some sensory interpretation that groups such things together so that they appear “laid out in space and time”, who is to say that there could be any ‘informing’ going on?

“computers use the “distinguishable states” of charge in electrical components to perform a host of useful tasks.”

Useful to whom? The beads of an abacus can be manipulated into states which are distinguishable by the user, but there is no reason to assume that this informs the beads, or the physical material that the beads are made of. Computers do not compute to serve their own sense or motives, they are blind, low level reflectors of extrinsically introduced conditions.

“Your mind is a state of a particular substrate but a “state” is an arrangement of events. ”

States and arrangements are not physical, because they require a mode of interpretation which is qualitative and aesthetic. Just as there can be no disembodied information, there can be no ‘states’ or ‘arrangements’ which are disentangled from the totality of sensible relations, and from specific participatory subsets therein. Information is a ghost – an impostor which reflects this totality in a narrow quantitative sense which is eternal but metaphysical, and in a physical sense which is tangible and present but in which all aesthetic qualities are reduced to a one-dimensional schema of coordinate permutation. Neither information nor physics can relate to each other or represent anything by themselves. It is my view that we should flip the entire assumption of forms and functions as primitively real around, so that they are instead derived from a more fundamental capacity to appreciate sensory affects and participate in motivated effects. The primordial character of the universe can only be, in my view, metaphenomenal, with physics, information, and subjectivity as sensible partitions of the whole.

Why PIP (and MSR) Solves the Hard Problem of Consciousness

September 16, 2013

The Hard Problem of consciousness asks why there is a gap between our explanation of matter, or biology, or neurology, and our experience in the first place. What is there which even suggests to us that there should be a gap, and why should there be such a thing as experience to stand apart from the functions of that which we can explain?

Materialism only miniaturizes the gap, and relies on a machina ex deus (an intentionally reversed deus ex machina) of ‘complexity’ to save the day. An interesting question would be: why does dualism seem easier to overlook when we are imagining the body of a neuron, or a collection of molecules? I submit that it is because miniaturization and complexity challenge the limits of our cognitive ability, and we find it easy to conflate that sort of quantitative incomprehensibility with the other incomprehensibility being considered, namely aesthetic* awareness. What consciousness does with phenomena which pertain to a distantly scaled perceptual frame is to under-signify them. They become less important, less real, less worthy of attention.

Idealism only fictionalizes the gap. I argue that idealism makes more sense on its face than materialism for addressing the Hard Problem, since material would have no plausible excuse for becoming aware, or for being entitled to access an unacknowledged a priori possibility of awareness. Idealism, however, fails at commanding the respect of a sophisticated perspective, since it relies on a naive denial of objectivity. Why so many molecules? Why so many terrible and tragic experiences? Why so much enduring of suffering and injustice? The thought of an afterlife is too seductive a way to wish this all away. The concept of maya – that the world is a veil of illusion – is too facile to satisfy our scientific curiosity.

Dualism multiplies the gap. Acknowledging the gap is a good first step, but without a bridge, the gap is diagonalized and stuck in infinite regress. In order for experience to connect in some way with physics, some kind of homunculus is invoked, some third force or function interceding on behalf of the two incommensurable substances. The third force requires a fourth and fifth force on either side, and so forth, as in a Zeno paradox. Each homunculus has its own Explanatory Gap.

Dual Aspect Monism retreats from the gap. The concept of material and experience being two aspects of a continuous whole is the best one so far – getting very close. The only problem is that it does not explain what this monism is, or where the aspects come from. It rightfully honors the importance of opposites and duality, but it does not question what they actually are. Laws? Information?

Panpsychism toys with the gap. Depending on what kind of panpsychism is employed, it can miniaturize, multiply, or retreat from the gap. At least it is committing to closing the gap in a way which does not take human exceptionalism for granted, but it still does not attempt to integrate qualia itself with quanta in a detailed way. Tononi’s IIT might be an exception in that it is detailed, but only from the quantitative end. The hard problem, which involves justifying the reason for integrated information being associated with a private ‘experience’, is still only picked at from a distance.

Primordial Identity Pansensitivity, my candidate for nomination, uses a different approach than the above. PIP solves the hard problem by putting the entire universe inside the gap. Consciousness is the Explanatory Gap. Naturally, it follows serendipitously that consciousness is also itself explanatory. The role of consciousness is to make plain – to bring into aesthetic evidence that which can be made evident. How is that different from what physics does? What does the universe do other than generate aesthetic textures and narrative fragments? It is not awareness which must fit into our physics or our science, our religion or philosophy, it is the totality of eternity which must gain meaning and evidence through sensory presentation.

 

*Is awareness ‘aesthetic’? That we call a substance which causes the loss of consciousness a general anesthetic might be a serendipitous clue. If so, the term local anesthetic, for an agent which deadens sensation, is another hint about our intuitive correlation between discrete sensations and the overall capacity to be ‘awake’. Between discrete sensations (which I would call sub-private) and personal awareness (privacy) would be a spectrum of nested channels of awareness.

 

Why Computers Can’t Lie and Don’t Know Your Name

September 12, 2013 12 comments

What do the Hangman Paradox, Epimenides Paradox, and the Chinese Room Argument have in common?

The underlying Symbol Grounding Problem common to all three is that from a purely quantitative perspective, a logical truth can only satisfy some explicitly defined condition. The expectation that truth itself is implicitly true (i.e. that it is possible to doubt what is given) is not a condition of truth, it is a boundary condition beyond truth*. All computer malfunctions, we presume, are due to problems with the physical substrate or the programmer’s code, not to incompetence or malice. The computer, its program, or binary logic in general cannot be blamed for trying to mislead anyone. Computation, therefore, has no truth quality, no expectation of validity or discernment between technical accuracy and the accuracy of its technique. The whole of logic is contained within the assumption that logic is valid automatically. It is an inverted mirror image of naive realism. Where a person can be childish in their truth evaluation, overextending their private world into the public domain, a computer is robotic in its truth evaluation, under-signifying privacy until it is altogether absent.

Because computers can only report a local fact (the position of a switch or token), they cannot lie intentionally. Lying involves extending a local fiction to be taken as a remote fact. When we lie, we know what a computer cannot guess – that information may not be ‘real’.

When we say that a computer makes an error, it is only because of a malfunction on the physical or programmatic level, therefore it is not false, but a true representation of the problem in the system which we receive as an error. It is only incorrect in some sense that is not local to the machine, but rather local to the user, who makes the mistake of believing that the output of the program is supposed to be grounded in their expectations for its function. It is the user who is mistaken.

It is for this same reason that computers cannot intend to tell the truth either. Telling the truth depends on an understanding of the possibility of fiction and the power to intentionally choose the extent to which the truth is revealed. The symbolic communication expressed is grounded strongly in the privacy of the subject as well as the public context, and only weakly grounded in the logic represented by the symbolic abstraction. With a computer, the hierarchy is inverted. A Turing Machine is independent of private intention and public physics, so it is grounded absolutely in its own simulacra. In Searle’s (much despised) Chinese Room Argument, the conceit of the decomposed translator exposes how the output of a program is only known to the program in its own narrow sensibility. The result of the mechanism is simply a true report of a local process of the machine which has no implicit connection to any presented truths beyond the machine…except for one: arithmetic truth.

Arithmetic truth is not local to the machine, but it is local to all machines and all experiences of correct logical thought. This is an interesting symmetry, as the logic of mechanism is both absolutely local and instantaneous and absolutely universal and eternal, but nothing in between. Every computed result is unique to the particular instantiation of the machine or program, and universal as a Turing-emulable template. What digital analogs are not is true or real in any sense which relates expressly to real, experienced events in spacetime. This is the insight expressed in Korzybski’s famous maxim ‘The map is not the territory,’ and in the use-mention distinction, where using a word intentionally is understood to be distinct from merely mentioning the word as an object to be discussed. For a computer, there is no map-territory distinction. It’s all one invisible, intangible mapitory of disconnected digital events.
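The use-mention distinction is one of the few points here that can be made concrete in code. As a minimal sketch (the symbol table, name, and value are all hypothetical, chosen only for illustration), a dictionary can stand in for the machine's memory:

```python
# A toy symbol table standing in for the machine's memory.
# (Hypothetical name and value, for illustration only.)
symbols = {"temperature": 98.6}

mentioned = "temperature"   # mention: the name itself, a mere string of characters
used = symbols[mentioned]   # use: dereferencing the mention to reach its referent

print(mentioned)   # the map
print(used)        # the (represented) territory
```

From the machine's side, both `mentioned` and `used` are just stored tokens; the lookup that crosses from one to the other is a rule-governed operation, not an act of understanding, which is the sense in which the machine inhabits a mapitory rather than a map and a territory.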

By contrast, a person has many ways to voluntarily discern territories and maps. They can be grouped together, such as when the acoustic territory of sound is mapped to the emotional-lyric territory of music, or the optical territory of light is mapped as the visual territory of color and image. They can be flipped so that the physics is mapped to the phenomenal as well, which is how we control the voluntary muscles of our body. For us, authenticity is important. We would rather win the lottery than just have a dream that we won the lottery. A computer does not know the difference. The dream and the reality are identical information.

Realism, then, is characterized by its opposition to the quantitative. Instead of being pegged to the polar austerity which is autonomous local + explicitly universal, consciousness ripens into the tropical fecundity of middle range. Physically real experience is in direct contrast to digital abstraction. It is semi-unique, semi-private, semi-spatiotemporal, semi-local, semi-specific, semi-universal. Arithmetic truth lacks any non-functional qualities, so that using arithmetic to falsify functionalism is inherently tautological. It is like asking an armless man to raise his hand if he thinks he has no arms.

Here’s some background stuff that relates:

The Hangman Paradox has been described as follows:

A judge tells a condemned prisoner that he will be hanged at noon on one weekday in the following week but that the execution will be a surprise to the prisoner. He will not know the day of the hanging until the executioner knocks on his cell door at noon that day.

Having reflected on his sentence, the prisoner draws the conclusion that he will escape from the hanging. His reasoning is in several parts. He begins by concluding that the “surprise hanging” can’t be on Friday, as if he hasn’t been hanged by Thursday, there is only one day left, and so it won’t be a surprise if he’s hanged on Friday. Since the judge’s sentence stipulated that the hanging would be a surprise to him, he concludes it cannot occur on Friday.

He then reasons that the surprise hanging cannot be on Thursday either, because Friday has already been eliminated and if he hasn’t been hanged by Wednesday night, the hanging must occur on Thursday, making a Thursday hanging not a surprise either. By similar reasoning he concludes that the hanging can also not occur on Wednesday, Tuesday or Monday. Joyfully he retires to his cell confident that the hanging will not occur at all.

The next week, the executioner knocks on the prisoner’s door at noon on Wednesday, which, despite all the above, was an utter surprise to him. Everything the judge said came true.

Some thoughts on this:

1) The conclusion “I won’t be surprised to be hanged Friday if I am not hanged by Thursday” creates another proposition to be surprised about. By leaving the condition of ‘surprise’ open ended, it could include being surprised that the judge lied, or any number of other soft contingencies that could render an ‘unexpected’ outcome. The condition of expectation isn’t an objective phenomenon, it is a subjective inference. Objectively, there is no surprise since objects don’t anticipate anything.

2) If we want to close in tightly on the quantitative logic of whether deducibility can be deduced: given five coin flips and a certainty that one will be heads, each successive tails flip increases the odds that one of the remaining flips will be heads. The fifth coin will either be 100% likely to be heads, or will prove that the assumed certainty was 100% wrong.

I think the paradox hinges on 1) the false inference of objectivity in the use of the word surprise and 2) the false assertion of omniscience by the judge. It’s like an Escher drawing. In real life, surprise cannot be predicted with certainty, and the quality of unexpectedness is not an objective thing, just as expectation is not an objective thing.

Connecting the dots, expectation, intention, realism, and truth are all rooted in the firmament of sensory-motive participation. To care about what happens cannot be divorced from our causally efficacious role in changing it. It’s not just a matter of being petulant or selfish. The ontological possibility of ‘caring’ requires letters that are not in the alphabet of determinism and computation. It is computation which acts as punctuation, spelling, and grammar, but not language itself. To a computer, every word or name is as generic as a number. They can store the string of characters that belong to what we call a name, but they have no way to really recognize who that name belongs to.

*I maintain that what is beyond truth is sense: direct phenomenological participation

Notes on Privacy

June 16, 2013 8 comments

The debates on privacy which have been circulating since the dawn of the internet age tend to focus either on the immutable rights of private companies to control their intellectual property or the obsolescence of the notion of actual people to control access to their personal information. There’s an interesting hypocrisy there, as the former rights are represented as pillars of civilized society and the latter expectations are represented as quaint but irrelevant luxuries of a bygone era.

This double standard aside, the issue of privacy itself is never discussed. What is it, and how do we explain its existence within the framework of science? To me, the term privacy as applied to physics is more useful in some ways than consciousness. When we talk about private information being leaked or made public, we really mean that the information can now be accessed by unintended private parties. There is really no scientific support for the idea of a truly ‘public’ perspective ontologically. All information exists only within some interpreter’s sensory input and information processing capacity. While few would argue that there is no universe beyond our human experience of it, who can say that there is no universe beyond *any* experience of it? Just because I don’t see it doesn’t mean it doesn’t look like something, but if nothing can see it, can we really say that it looks like something?

Privacy would be more of a problem for theoretical physics than it is for internet users, if physicists were to try to explain it. It is through the problems which have risen with the advent of widespread computation that we can glimpse the fundamental issue with our worldview and with our legacy understanding of its physics. With identity theft, pirated software, appropriated endorsements, data mining, and now Prism, it should be obvious that technology is exposing something about privacy itself which was not an issue before.

The physics of privacy that I propose suggests that by making our experiences public through a persistent medium, we are trading one kind of entropy for another. When we express an aspect of our private life into a public network, the soft, warm blur of inner sense is exposed to the cold, hard structure of outer knowledge. It is an act which is thermodynamically irreversible – a fact which politicians seem slow to understand as the cover-up of the act seems invariably the easier transgression to discover and prove. The cover up alerts us to the initial crime as well as a suggestion of the knowledge of guilt, and the criminal intent to conceal that guilt. The same thing undoubtedly occurs on a personal level as subjects which are most threatening to people’s marriages and careers are probably those which can be found by searching for purging behavior and keywords related to embarrassment.

As the high-entropy fuzziness of inner life is frozen into the low-entropy public record, a new kind of entropy over who can access this record is introduced. Security issues stem from the same source as both IP law issues and surveillance issues. The ability to remain anonymous, to expose anonymity, to spoof identifiers leading to identification, etc., are all examples of the shadow of private entropy cast into the public realm. There’s no getting around it. Identity simply cannot be pinned down 100% – that kind of personal entropy can only be silenced personally. Only we know for sure that we are ourselves, and that certainty, that primordial negentropy, is the only absolute which we can directly experience. Descartes’ cogito is a personal statement of that absolute certainty (Je pense donc je suis), although I would say that he was too narrow in identifying thought in particular as the essence of subjectivity. Indeed, thinking is not something that we notice until we are a few years old, and it can be pushed into the background of our awareness through a variety of techniques. I would say instead that it is the sense of privacy which is the absolute: solace, solitude, solipsism – the sense of being apart from all that can be felt, seen, known, and done. There is a sense of a figurative ‘place’ in which ‘we’ are which is separate and untouchable to anything public.

This sense seems to be corroborated by neuroscience as well, since no instrument of public discovery seems to be able to find this place. I don’t see this as anything religious or mystical (though religion and mysticism do seek to explain this sense more than science has), but rather as evidence that our understanding of physics is incomplete until we can account for privacy. Privacy should be understood as something which is as real as energy or matter; in fact, it should be understood as that which divides the two and discerns the difference. Attention to what is revealed, intention to reveal or conceal, and the oscillation between them are at the heart of all identity, from human beings to molecules. The control of uncertainty, through camouflage, pretending, and outright deception, has been an issue in biology almost from the start. Before biology, concealment seems limited to unintentional circumstances of placement and obstruction, although that could be a limitation of our perception as well. Since what we can see of another’s privacy may not ever be what it appears, it stands to reason that our own privacy may not ever be able to play the role of impartial public observer. Privacy is made of bias, and that bias is the relativistic warping of perception itself.

Privacy and Social Media

Continuing with the idea of information entropy as it relates to privacy, social media acts as a laboratory for these kinds of issues. Before Facebook, the notion of friendship floated on a cushion of consensual entropy – politeness. As the song goes, “don’t ask me what I think of you, I might not give the answer that you want me to.” Whom one considered a friend was largely a subjective matter with high public entropy. Even when declaring friendship openly, there was no binding agreement, and it was effortless for sociable people to retain many asymmetric relations. Politeness has always been part of the security apparatus of those who are powerful or popular. Nobility and politeness have a curious relation, as the well-heeled are expected to embody exemplary breeding but also have license to employ rudeness and blunt honesty at will. The haughtiness of high position is one of reserving one’s own right to expose others’ faults while being protected from others’ ability to do the same.

Facebook, while not the first social network to employ a structure of friendship granting, has made the most out of it. From the start, the agenda of Facebook has been to neutralize the power of politeness and to encourage public declaration of friendship as a binding, binary statement – yes you are my friend or (no response). Unfriending someone is a political act which can have real implications. Even failing to respond to someone’s friend request can have social currency. The result is a tacit bias toward liberal friending policies, and a consequent need for filtering to control who are treated as friends and who are treated as potential friends, tolerated acquaintances, frienemies, etc. Google Plus offers a more explicit system for managing this non-consensual social entropy to more conveniently permit social asymmetry.

Twitter has wound up playing an unusual role in which the privacy of elites is protected in one sense and exposed in another. Unlike other social networks, Twitter caps tweets at 140 characters; the limit, which came from the desire to make tweets compatible with SMS, has the unintentional consequence of providing a very fast stream with a low investment of attention. For a celebrity who wants to retain their popularity and relevance, it is an ideal way to keep in touch with large numbers of fans without the expectation of social involvement that is implied by a richer communication system. It gives back some of the latitude which Facebook takes away – you don’t have friends on Twitter, you have Followers. It is not considered as much of a slight not to follow someone back, and it is not considered a threat to follow someone that you don’t know. In a way, Twitter makes controlled stalking acceptable, just as Facebook makes being nosy about someone’s friends acceptable.

Hacktivist as hero, villain, genius, and clown.

There is more than enough that has been written on the subject of the changing attitudes toward technoverts (geeks, nerds, dorks, dweebs, et. al.) over the last three decades, but the most contentious figure to come out of the computer era has been the one who is skilled at wielding the power to reveal and conceal. Early on, in movies like Wargames and The Net, there was a sense of support for the individual underdog against the impersonal machine. Even R2D2 in the original Star Wars played David to the Death Star’s Goliath computer while connecting to it secretly. The tide began to turn it seems, in the wake of Napster, which unleashed a worldwide celebration of music sharing, to the horror of those who had previously enjoyed a monopoly over the distribution of music. Since then, names like Anonymous, Assange, and Snowden have aroused increasingly polarized feelings.

The counter-narrative of the hacker as villain, although always present within the political and financial power structures as a matter of protection, has become a new kind of arch-enemy in the eyes of many. It is very delicate territory for the media to get into. Journalism, like nobility, floats on a layer of politeness. To continue to be able to reveal some things, it must conceal its sources. The presence of an Assange or Snowden presents a complicated issue. If they befriend the hacker/whistleblower/whistleblower-enabler, then they become linked to their authority-challenging values, but if they vilify them, then they indict their own methods and undermine their own moral authority and David vs Goliath reputation.

Of course, it’s not just the issues of whistleblowing in general but the specific character of the whistleblower and the organization they are exposing which are important. This is not a simple matter of legal principle, since whether the ability to access protected information is good, bad, lawful, chaotic, admirable, frivolous, etc. really depends on whom in society we support. It all depends on whether the target of the breach is themselves good, bad, lawful, chaotic, etc. In America in particular we are of two minds about justice. We love the Dirty Harry style of vigilante justice on film, but in reality we would consider such acts to be terrorism. We like the idea of democracy in theory, but when it comes to actual exercises of freedom of speech and assembly in protest, we break out the tear gas and shake our heads at the naive idealists.

Twitter fits in here as well. It is as much the playground of celebrities to flirt with their audience as it is the authentic carrier of news beyond the control of the media. It too can be used for nefarious purposes. Individuals and groups can be tracked, disinformation and confusion can be spread. David and Goliath can both imitate each other, and the physics of privacy and publicity have given rise to a new kind of ammunition in a new kind of perpetual war.
