Orthomodular Panprimordialism
Playing around with a more math-friendly look and feel. Multisense Realism’s quant-flavored twin…Orthomodular Panprimordialism
I am really pushing it with the neologisms, but I am liking both of the recent adds, pansensitivity and now panprimordialism.
Pansensitivity is used to emphasize a position beyond panpsychism, idealism, and materialism, where sensitivity becomes a palatable common capacity for all phenomena at their native scale.
Panprimordialism is used to emphasize the distribution of ‘one-ness’ across all phenomena, the relocating of all quantities to interrelated diffractions within a sole, absolute singularity. This constitutes a figure-ground pivot from arithmetic assumptions which place zero or null as an absolute, such that all null values are considered a hypothetical representation of the sense of one’s self-nullification.
The orthomodular lattice above gives some idea of that relation, with each node acting as the nexus for orderly juxtapositions within the overall monad.
Philosophical Gender

Like sexual gender, the psyche tends to favor hovering around one aesthetic preference at a time. So much of philosophy seems to be rooted in just that, the aesthetic preferences of the psyche. How else should we explain why we are so often personally attached to our philosophical views – why we are in fact attracted to them, and to writers and speakers who have espoused similar perspectives?
Many are traditional in their philosophical tastes, and find that even the thought of experimenting with other views makes them very uncomfortable. Others find it natural to consume philosophy of all sorts, the more the better, but at the same time they may favor one particular flavor, or they may get sick of the whole intellectual-masturbatory scene eventually.
Philosophy engenders a feeling of firm orientation within it, despite the many other options available which might directly contradict it. That’s sort of the hook. A particular way of looking at things makes you feel that you are on the right track, maybe for the first time. It can change the way that you feel about other ways of acting and thinking. Like hitting puberty, what was once merely charged with social naughtiness and furtive mystery becomes irrepressibly intense. Childish ways of behaving, especially those which cross gender or leave it undeveloped are often discarded in shame and become repulsive, at least publicly. Gender is suddenly unexpectedly prominent, and exaggerated to the point of caricature.

Philosophy is almost inevitably tied to politics. Views on what the universe is, though seemingly esoteric and remote, have a way of filtering into our attitudes about everything from nature and technology, to society, personal responsibility, money, possessions, art, drugs, literature, etc. Early math is practically inseparable from philosophy.
There are many polarities and nested polarities within philosophy, especially philosophy of mind. I often focus on reducing those polarities (reductive vs non-reductive…there’s another polarity) to a single hetero-normative gender polarity which I am lately calling Anthropomorphism and Mechanemorphism, but have also referred to it as ACME and OMMM, Oriental Animism and Western Mechanism, Public-entropic and Private-holotrophic, and for those with a symbol fetish, ((-ℵ↔Ω) ↓ ºt) and (ωª ↑ (H←d)).
In the course of studying this swirl of gender, it became apparent that the swirl itself could be transcended philosophically. While the battle between mind-firsters and body-firsters rages on forever, the battle itself can be seen as their most powerful overlap. Somehow even in the antiquated writings of long dead thinkers (well, the thinkers who were deemed white enough and male enough to be published anyways), fresh controversy can be sparked. It’s remarkable, really. The enduring conflict, a perpetually circulating difference of opinion on everything, the difference between differences, and different ways of defining difference, and defining definition.
Is philosophy a strange attractor?
“The Lorenz attractor is an example of a strange attractor. Strange attractors are unique from other phase-space attractors in that one does not know exactly where on the attractor the system will be. Two points on the attractor that are near each other at one time will be arbitrarily far apart at later times. The only restriction is that the state of system remain on the attractor. Strange attractors are also unique in that they never close on themselves — the motion of the system never repeats (non-periodic). The motion we are describing on these strange attractors is what we mean by chaotic behavior.”
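To make the quoted idea concrete, here is a minimal sketch (plain Python, crude Euler integration, the classic Lorenz parameter values) showing two trajectories that start a millionth apart and drift far apart, while both remain bounded on the attractor; the function names and step size are just illustrative choices, not anything from the quoted source.

```python
# A sketch of the Lorenz system described above, using crude Euler integration
# and the classic parameters (sigma=10, rho=28, beta=8/3). Two states that start
# one part in a million apart end up far apart, yet both stay bounded on the
# attractor and never settle into a repeating cycle.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def separation(a, b):
    """Euclidean distance between two states."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)   # differs by one part in a million

for step in range(5001):
    if step % 1000 == 0:
        print(f"step {step:5d}   separation = {separation(a, b):.6f}")
    a, b = lorenz_step(a), lorenz_step(b)
```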
If there were an ‘end’ to all of this (which is what exactly?) I think that it can be found not in being and nothingness or in difference and repetition, but in recognizing the commonality of conflict – the sense which discerns between differences and which also is the motivation which negates indifference. Carrying this principle into physics, and then mathematics, what it looks like is that a revolution of the most primordial propositions should be considered:
- The number ‘one’ should be reinvented and restored as the root of the number line.
- Zero should be regarded as neither a real nor an imaginary number, but rather an imaginary absence of all number.
- The Big Bang singularity should be reinterpreted to reflect this new understanding of 1 and 0, beginnings and endings.
Here is the main insight: Since the difference between a difference (1) and indifference (no difference = 0) is in fact a difference, the two concepts are not perfectly symmetrical negations of each other, but rather, indifference, like nonsense or disorder, is a qualifier of difference. Zero is just 1 minus itself. In a universe of just the concept of 1 and subtraction, 1 would have to reproduce itself once in order to have another one to subtract, and then reproduce itself once more in order to carry out the subtraction. One cannot disappear, and zero cannot generate any numbers or operations. They don’t cancel each other out, they nest within each other in strange loops.
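As a piece of shorthand (my own notation for the claim above, not a standard arithmetical foundation), the asymmetry can be written out like this:

```latex
% My shorthand for the claim above, not a standard arithmetic foundation:
% zero is derived from 1, while 1 is primitive.
\[
  0 \;:=\; 1 - 1
\]
% Producing 0 therefore presupposes two occurrences of 1 and one act of
% subtraction, whereas 0 by itself yields no further number and no operation:
\[
  \{\,1,\ 1,\ -\,\} \;\Rightarrow\; 0,
  \qquad
  \{\,0\,\} \;\Rightarrow\; \emptyset .
\]
```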
In this way, as I have posted before, the Big Bang must never be considered an explosion in space at some particular moment in the past, but rather it is the frame of all events, and all spaces. We are within the Big Bang, which was not a 1 emerging from 0, not a Universe from Nothing, but the opposite. What I call the Big Diffraction is “The Universe Within Everything”. The whole of physics can be seen as pieces to the puzzle which is getting more piece-ier and peace-ier as it goes on. The whole of mathematics can be seen as taking place within the number one, transformed, non-Euclidean style into the Absolute set of all sets. One is not an object, it is a primordial language of experience, of sense and sense making – a singularity not only of quantity, but of ontological-psychophysical gender.
But wait. Sense is not just a matter of being and knowing, it is also a matter of sensing and thinking, of comparing. It does not resolve the Material and the Experiential as being ‘the same thing’, it resolves them as both being equal to the same thing (1) in opposite ways. The Big Bang is not just 1, it is more like “=1”. This is a more primordial opposite even than being and nothingness, since nothingness can only be imagined by something. The relation of = to 1 is an opposition like that of 1 and 0, but more subtle. Just look at the characters. Parallel horizontal lines compared with an arrow-like stroke of singular effort. I guess I’m getting too into this, but whatever, consider it a piece of Suprematist art. It’s a before and after, an open canal, and an erect figure. An invitation and an expression. There’s a whole philosophy lurking in there just in the shapes of arithmetic symbols. Hmm.
Hypostitious Minds
Once upon a time, the belief in witchcraft was about as common as the belief in using soap.
…how pervasive was belief in witchcraft in early modern England? Did most people find it necessary to use talismans to ward off the evil attacks of witches? Was witchcraft seen as a serious problem that needed to be addressed? The short answer is that belief in witchcraft survived well into the modern era, and both ecclesiastical and secular authorities saw it as an issue that needed to be addressed.
In 1486, the first significant treatise on witchcraft, evil, and bewitchment, Malleus Maleficarum, appeared in continental Europe. A long 98 years later, the text was translated into English and quickly ran through numerous editions. It was the first time that religious and secular authorities admitted that magic, witchcraft, and superstition were, indeed, real; it was also a simple means of defining and identifying people who performed actions seen as anti-social or deviant.
Few doubted that witches existed, and none doubted that being a witch was a punishable offense. But through the early modern era, witchcraft was considered a normal, natural aspect of daily life, an easy way for people, especially the less educated, to explain events in the confusing world around them. – (source)
In many parts of the world, the belief in witchcraft is still very common.

“As might be expected, the older and less educated respondents reported higher belief in witchcraft, but interestingly such belief was inversely linked to happiness. Those who believe in witchcraft rated their lives significantly less satisfying than those who did not.
One likely explanation is that those who believe in witchcraft feel they have less control over their own lives. People who believe in witchcraft often feel victimized by supernatural forces, for example, attributing accidents or disease to evil sorcery instead of randomness or naturalistic causes.” (source)
Another poll on beliefs in the U.S.:
What People Do and Do Not Believe in
Many more people believe in miracles, angels, hell and the devil than in Darwin’s theory of evolution; almost a quarter of adults believe in witches
New York, N.Y. — December 15, 2009 —
A new Harris Poll finds that the great majority (82%) of American adults believe in God, exactly the same number as in two earlier Harris Polls in 2005 and 2007. Large majorities also believe in miracles (76%), heaven (75%), that Jesus is God or the Son of God (73%), in angels (72%), the survival of the soul after death (71%), and in the resurrection of Jesus (70%). Less than half (45%) of adults believe in Darwin’s theory of evolution but this is more than the 40% who believe in creationism. These are some of the results of The Harris Poll of 2,303 adults surveyed online between November 2 and 11, 2009 by Harris Interactive. The survey also finds that: 61% of adults believe in hell; 61% believe in the virgin birth (Jesus born of Mary); 60% believe in the devil; 42% believe in ghosts; 32% believe in UFOs; 26% believe in astrology; 23% believe in witches; 20% believe in reincarnation. (source)
Because there are so many benefits associated with freedom from superstition, it is not much of a tradeoff emotionally to go from a world of mystical phantoms to one of scientific clarity. Many people never get the opportunity to be exposed to scientific knowledge in the right way, at the right time in their life, but it seems that those who do seize that opportunity are happy with their decision. Of course, not everyone decides to block out all religious or spiritual beliefs when they accept scientific truths, and even though there are probably more people alive today who believe in sorcery than there were in 1600, there are also more people now who believe in germs, powered flight, and heliocentric astronomy.
There is little argument that scientific knowledge and its use as a prophylactic against the rampant spread of superstition is a ‘good thing’. Is it possible though, to have too much of a good thing? Is there a limit to how much we should insist upon determinism and probability to explain everything?
The Over-Enlightenment
The City Dark is a recent documentary about the subtle and not-so-subtle effects of light pollution. Besides new and unacknowledged health dangers from changed sleeping habits in people and ecological upheaval in other species, the show makes the case that the inescapable blur of light which obscures our view of the night sky is quietly changing our view of our own lives. The quiet importance of the vast heavens in setting our expectations and limiting the scale of our ego has been increasingly dissolved into a haze of metal halide. In just over a century, night-time illumination has gone from a simple extension of visibility into the evening to a 24-hour saturation coverage of uninhabited parking lots, residential neighborhoods, and office buildings. The connection between the power to see, do, and know is embodied literally in our history as the Enlightenment, Industrial Age, and Information Age.
The 20th century was the cusp of the Industrial and Information ages, beginning with Edison and Einstein redefining electricity, light, and energy, and peaking with the midcentury Atomic Age, when radiation became a household word and microwave ovens began to cook with invisible light rather than heat. Television became an artificial light source which we used not only as a silent companion with which to see the world, but as a kind of hypnotic signal emitter which we stare directly into – the home version of that earlier invention which came into its own in the 20th century, the motion picture. The century which tracked the spread of electricity and light from urban centers to the suburbs ended with the internet and mobile phones bringing CRT, LED, and LCD light into our personal space. Where once electronic devices were confined to living rooms and cars, we are now surrounded by tiny illuminated dots and numbers, and a satellite connection is hardly ever out of arm’s reach.
A Life Sentence
In Discipline and Punish, Foucault details the history of the prison and the rise of disciplinary culture in Europe as it spread from monasteries through hospitals, the military, police, schools, and industry. He discusses how the concept of justice evolved from the whim of the king to torture and publicly execute whomever he pleased, to kangaroo courts of simulated justice, to the modern expectation of impartiality and evidence in determining guilt.
The shift of punishment style from dismemberment to imprisonment reflected the change in focus from the body to the mind. The Reformation gave Western Europe a taste of irreverence and self-determination; at the same time, the monastic lifestyle was adopted throughout pre-Modernity. To be a hospital patient, student, soldier, prisoner, or factory laborer was to enter a world of strict regulation, immaculate uniforms, and constant inspection. Inspection is a central theme which Foucault examines. He describes how an obsessive regimen of meticulous inspection, monitoring, and standardized testing reached its ultimate expression in the panopticon architecture. Through this central-eye floor plan, the population is exposed and personally vulnerable while the administration retains the option to remain concealed and anonymous.


Circumstantial Evidence
Tying these themes of inspection, enlightenment, and illumination together with witchcraft is the concept of evidence. What could be more scientific than evidence?
evident (adj.)
late 14c., from Old French evident and directly from Latin evidentem (nominative evidens) “perceptible, clear, obvious, apparent” from ex- “fully, out of” (see ex-) + videntem (nominative videns), present participle of videre “to see” (see vision)
The Salem Witch Trials famously victimized those who were targeted as witches by subjecting them to what seem to us now as ludicrous tests. This gives us a good picture of the transition from pre-scientific to scientific practices in society. This adolescent point between the two reveals a budding need to rationalize harsh punishments intellectually, but not enough to prevent childish impatience and blame from running the show.
The idea of circumstantial evidence – evidence which is only coincidentally related to a crime – marks a shift in thinking which is echoed in the rise of the scientific method. As the mindset of those in power became more modern, the validity of all forms of intuition and supernatural sources came into question. Where once witchcraft and spirits were taken seriously, now there was a radical correction. It was belief in the supernatural which was revealed to be obsolete and suspicious. The default position had changed from one which assumed spirits and omens to one which assumed coincidence, exaggeration, and mistaken impressions. Beyond even the notion of innocent until proven guilty, it was the notion that proof mattered in the first place which was the Enlightenment’s gift to the cause of human liberation.
Few would dispute that this new dis-belief system which brought us out of savagery is a good thing, but also, as Foucault intimates, we cannot assume that it is all good. Is incarceration really the humane and effective way of discouraging crime that we would like to think, or is it a largely hypocritical enactment of a fetish for control? Does the desire to predict and control lead to an insatiable desire to dictate and invade others?
Cold Readings
There have been many exposés of psychics and mediums over the years where stage magicians and others have run down the kinds of tricks that can be used to gather unexpected intelligence from an audience and use it to fool them. The cold reading is a way of fooling a mark into thinking that the psychic has supernatural powers, when in fact the psychic is only fishing with high-probability guesses and feeding the mark’s own reactions back to them (or, in the ‘hot reading’ variant, has had an assistant look through their purse earlier).
Ironically, these techniques are the same techniques used in science, except that they are intended to reveal the truth rather than instigate a fraud. Statistical analyses and reductive elimination are key aspects of the scientific method, giving illumination to hidden processes. In neuroscience, for instance, an fMRI is not really telling us about how a person thinks or feels; rather, the physiological changes that we can measure are used to produce a kind of cold reading of the subject’s experience, based on our own familiarity with our personal experience.
This is all fantastic stuff, of course, but there seems to be a point where the methods of logical inference from evidence crosses over into its own kind of pathology. The etymology of superstition talks about “prophecy, soothsaying, excessive fear of the gods”. The suffix ‘-stition’ is from the same root as ‘-standing’ in understanding. There is a sense of the mind compulsively over-reaching for explanations, jumping to conclusions, and rendered stupid by naivete.
The converse pathology does not have a popular name like that, although people use the word pseudoskeptical to emphasize a passionately prejudiced attitude toward the unproved rather than a scientifically impartial stance. The neologism I am using here, hypostition, puts the emphasis on the technical malfunction of the scientific impulse run amok. Where superstition is naive, hypostition is cynical. Where superstition jumps to conclusions, hypostition resists any conclusion, no matter how clear and compelling, in which the expectations of the status quo are called into question.
Tests in Life
Much of what is meant by witchcraft can be boiled down to an effort to access secret knowledge and power. The witch uses divination to receive guidance and prophecy intuitively, often by studying patterns of coincidence and invoking a private intention to find its way to a public expression. Superstition swims in the same waters, reading into coincidence and projecting its own furtive impulses outwardly. Beyond that, we are talking about herbal medicine, folk psychology, and rituals mythologizing nature.
The goal of science and technology is similarly an effort to extract knowledge and power from nature, but to do so without falling into the trap of magical thinking. Instead of making a pact with occult forces, the scientist openly experiments to expose nature. Along the way, there are often lucky coincidences which lead to breakthroughs, and challenges which seem tailor-made to derail the work. These trials and tribulations, however, are not supported by science. If we adopt the hypostitious frame of mind, there can be no narrative to our experience, no fortunate people, places, or times, beyond the allowable margins of chance.
We have come full circle on coincidence, where we are obliged to doubt even the most life-altering synchronicity as mere statistical inevitability.
In place of superstition we have neuroses. Our triumph over the fear of the unknown has become an insidious phobia of the known. Even to recognize this would be to admit some kind of narrative pattern in human history. Recognition of such a pattern is discouraged. The tests which we face are not allowed to make that kind of sense, unless it can be justified by the presence of a chemical in the body, or a behavior in another species.
A Way Out
For me, the recognition of the two poles of superstition and hypostition is enough to realize that the way forward is to avoid the extremes most of the time. Intuition and engineering both have their place, and the key is not to always try to squeeze one into the other. At this point, the world seems to be nightmarishly extreme in both directions at the same time, but maybe it has always seemed that way?
The challenge I suppose is to try to find a way to escape each other’s insanity, or to contribute in some way toward improving what we can’t escape from. With some effort and luck, our fear of the dark and insensitivity to the light might be transformed into a full range of perception. Nah, probably not.
Strong AI Position
It may not be possible to imitate a human mind computationally, because awareness may be driven by aesthetic qualities rather than mathematical logic alone. The problem, which I call the Presentation Problem, is what several outstanding issues in science and philosophy have in common, namely the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the Binding problem, and the symmetries of mind-body dualism. Underlying all of these is the map-territory distinction; the need to recognize the difference between presentation and representation.
Because human minds are unusual phenomena in that they are presentations which specialize in representation, they have a blind spot when it comes to examining themselves. The mind is blind to the non-representational. It does not see that it feels, and does not know how it sees. Since its thinking is engineered to strip out most direct sensory presentation in favor of abstract sense-making representations, it fails to grasp the role of presence and aesthetics in what it does. It tends toward overconfidence in the theoretical. The mind takes worldly realism for granted on one hand, but conflates it with its own experiences as a logic processor on the other. It’s a case of the fallacy of the instrument, where the mind’s hammer of symbolism sees symbolic nails everywhere it looks. Through this intellectual filter, the notion of disembodied algorithms which somehow generate subjective experiences and objective bodies, (even though experiences or bodies would serve no plausible function for purely mathematical entities) becomes an almost unavoidably seductive solution.
So appealing is this quantitative underpinning for the Western mind’s cosmology, that many people (especially Strong AI enthusiasts) find it easy to ignore that the character of mathematics and computation reflect precisely the opposite qualities from those which characterize consciousness. To act like a machine, robot, or automaton, is not merely an alternative personal lifestyle, it is the common style of all unpersons and all that is evacuated of feeling. Mathematics is inherently amoral, unreal, and intractably self-interested – a windowless universality of representation.
A computer has no aesthetic preference. It makes no difference to a program whether its output is displayed on a monitor with millions of colors, or buzzing out of a speaker, or streaming as electronic pulses over a wire. This is the primary utility of computation. This is why digital information is not locked into physical constraints of location. Since programs don’t deal with aesthetics, we can only use the program to format values in a way that corresponds with the expectations of our sense organs. That format of course, is alien and arbitrary to the program. It is semantically ungrounded data, fictional variables.
Something like the Mandelbrot set may look profoundly appealing to us when it is presented optically, plotted as colorful graphics, but the same data set has no interesting qualities when played as audio tones. The program generating the data has no desire to see it realized in one form or another, no curiosity to see it as pixels or voxels. The program is absolutely content with a purely quantitative functionality – with algorithms that correspond to nothing except themselves.
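As a toy illustration of that indifference, the sketch below (plain Python, with an arbitrary ASCII character ramp and an arbitrary 200–2000 Hz mapping standing in for ‘optical’ and ‘audio’ presentation) computes one grid of Mandelbrot escape times and then re-enacts the very same numbers in two unrelated formats; nothing in the generating loop knows or cares which one is used.

```python
# A toy sketch of the point above: the same escape-time data from the Mandelbrot
# set can be "re-enacted" as a picture or as a list of audio frequencies, and the
# generating loop is indifferent to which. The rendering choices (character ramp,
# 200-2000 Hz mapping) are arbitrary conventions supplied by us, not by the math.

def escape_time(c, max_iter=40):
    """Number of iterations before z = z^2 + c escapes |z| > 2 (the raw 'data')."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# One grid of pure quantities.
rows = []
for im in range(-14, 15):
    rows.append([escape_time(complex(re / 30.0, im / 15.0)) for re in range(-60, 31)])

# Presentation 1: "optical" rendering as ASCII shading.
ramp = " .:-=+*#%@"
for row in rows:
    print("".join(ramp[min(v * len(ramp) // 41, len(ramp) - 1)] for v in row))

# Presentation 2: "audio" rendering, mapping the very same values to frequencies in Hz.
tones = [200 + v * 45 for v in rows[14]]   # middle scan line as a tone sequence
print(tones[:10], "...")
```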
In order for the generic values of a program to be interpreted experientially, they must first be re-enacted through controllable physical functions. It must be perfectly clear that this re-enactment is not a ‘translation’ or a ‘porting’ of data to a machine, rather it is more like a theatrical adaptation from a script. The program works because the physical mechanisms have been carefully selected and manufactured to match the specifications of the program. The program itself is utterly impotent as far as manifesting itself in any physical or experiential way. The program is a menu, not a meal. Physics provides the restaurant and food, subjectivity provides the patrons, chef, and hunger. It is the physical interactions which are interpreted by the user of the machine, and it is the user alone who cares what it looks like, sounds like, tastes like etc. An algorithm can comment on what is defined as being liked, but it cannot like anything itself, nor can it understand what anything is like.
If I’m right, all natural phenomena have a public-facing mechanistic range and a private-facing animistic range. An algorithm bridges the gap between public-facing, space-time extended mechanisms, but it has no access to the private-facing aesthetic experiences which vary from subject to subject. By definition, an algorithm represents a process generically, but how that process is interpreted is inherently proprietary.
Determinism: Tricks of the Trade
The objection that the terms ‘consciousness’ or ‘free will’ are used in too many different ways to be understandable is one of the most common arguments that I run into. I agree that it is a superficially valid objection, but on deeper consideration, it should be clear that it is a specious and ideologically driven detour.
The term free will is not as precise as a more scientific term might be (I tend to use motive, efferent participation, or private intention), but it isn’t nearly the problem that it is made out to be in a debate. Any eight-year-old knows well enough what free will refers to. Nobody on Earth can fail to understand the difference between doing something by accident and doing it intentionally, or between enslavement and freedom. The claim that these concepts are somehow esoteric doesn’t wash, unless you already have an expectation of a kind of verbal-logical supremacy in which nothing is allowed to exist until we can agree on a precise set of terms which give it existence. I think that this expectation is not a neutral or innocuous position, but actually contaminates the debate over free will, stacking the deck unintentionally in favor of determinism.
It’s subtle, but ontologically, it is a bit like letting a burglar talk you into opening up the door to the house for them since breaking a window would only make a mess for you to clean up. Because the argument for hard determinism begins with an assumption that impartiality and objectivity are inherently desirable in all things, it asks that you put your king in check from the start. The argument doubles down on this leverage with the implication that subjective intuition is notoriously naive and flawed, so that not putting your king in check from the start is framed as a weak position. This is the James Randi kind of double-bind. If you don’t submit to his rules, then you are already guilty of fraud, and part of his rules is that you have no say in what his rules will be.
This is the sleight of hand used by Daniel Dennett as well. What poses as a fair consideration of hard determinism is actually a stealth maneuver to create determinism – to demand that the subject submit to the forced disbelief system and become complicit in undermining their own authority. The irony is that it is only through a personal/social, political attack on subjectivity that the false perspective of objectivity can be introduced. It is accepted only by the presentation of an argument of personal insignificance, so that the subject is shamed and bullied into imagining itself an object. Without knowing it, one person’s will has been voluntarily overpowered and confounded by another person’s free will into accepting that this state of affairs is not really happening. In presenting free will and consciousness as a kind of stage magic, the materialist magician performs a meta-magic trick on the audience.
Some questions for determinist thinkers:
- Can we effectively doubt that we have free will? Or is the doubt a mental abstraction which denies the very capacity for intentional reasoning upon which the doubt itself is based?
- How would an illusion of doubt be justified, either randomly or deterministically? What function would an illusion of doubt serve, even in the most blue-sky hypothetical way?
- Why wouldn’t determinism itself be just as much of an illusion as free will or doubt under determinism?
Another common derailment is to conflate the position of recognizing the phenomenon of subjectivity as authentic with religious faith, naive realism, or soft-headed sentimentality. This also is ironic, as it is an attack on the ego of the subject, not on the legitimacy of the issue. There is no reason to presume any theistic belief is implied just because determinism can be challenged at its root rather than on technicalities.
If consciousness cannot be explained by algorithms, then, by default, would you have to rely on a supernatural explanation?
Answer by Lev Lafayette:
No.
The following is copy-paste, but dammit, I wrote it in the first place and it's (exhaustively) on-topic.
tl;dr. The foundation of consciousness is having shared symbolic values. Whilst consciousness cannot be reduced to physical phenomena, this doesn't mean it doesn't arise from physical phenomena in a social setting (supervenience).
Mary, the Swampy Philosophical Zombie, Is In Your Chinese Room! Problems With Reductionist Theories of Consciousness
(I intend to use this title for a journal article on the subject)
1.0 What Is Consciousness?
"Consciousness is a fascinating but elusive phenomenon. Nothing worth reading has been written about it."
Stuart Sutherland in the 1989 International Dictionary of Psychology
1.1 Consciousness is variously defined along with sentience (from the Latin “to feel”) and sapience (Latin “to know”, or “to be wise”). "Consciousness" derives from Latin conscientia, which primarily means moral conscience (knowledge-with, shared knowledge). Descartes was the first to use it in the sense of the individual ego, a sense which Locke expanded to include moral responsibility. Consciousness is typically described in terms of phenomenological subjectivity; awareness, a sense of self, which is also applied in contemporary medicine as a continuum (from being fully alert and cognisant, to being disorientated, to delirious, to being unconscious and unresponsive). The historical definition suggested social co-knowledge (con- "together" + scire "to know"), suggesting moral reasoning (conscientia, conscience) and language. This original use is still applied in law, in the concept of legal responsibility requiring consciousness.
1.2 Consciousness is distinguished by Ned Block between “phenomenal consciousness” (P-consciousness) of pure experience, sounds, emotions etc., and “access consciousness” (A-consciousness) of introspection, memory etc. The exploration of consciousness as experience and memory belongs to the philosophical school and psychology of phenomenology. There is also a theoretical distinction between the "easy problems of consciousness", such as functional responses, perceptual discrimination etc., and the "hard problem of consciousness" (qualia, such as colours, tastes). The hard problem is answering why physical processing gives rise to an inner life at all (Chalmers, 1995).
1.3 The philosophical concept of consciousness has been criticised from sources as varied as Marx, Nietzsche and Foucault. Marx considered that social relations preceded consciousness (“It is not the consciousness of men that determines their existence, but their social existence that determines their consciousness.”), whereas Nietzsche reversed the conception of free will and moral action ("they give you free will only to later blame yourself"). Some philosophers, eliminativist physical monists, deny the existence of consciousness at all.
1.4 There is a strong tie between consciousness and language (in its broadest sense). Medical and legal opinion both agree that assessments of consciousness must include the capacity to engage in communication. A concept of 'self' that is beyond the instinctual is only formulated through language and culture, with the handful of 'feral children' (e.g., the Genie case) serving as evidence. Descartes also argued that the lack of language in animals indicated a lack of access to res cogitans, the realm of thought (although many animals have since been shown to engage in fairly sophisticated communication).
"The limits of my language mean the limits of my world.", Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922), Section 5.62.0 Models of Consciousness
2.1 There are two broad models of consciousness as they relate to the mind-body problem (and related subjects such as materialism and idealism). Monism argues that the mind and the body are the same; dualism argues that they are separate. Within monism there are essentially two types, idealistic monism and physicalist monism; they have a perhaps surprising degree of similarity.
2.2 The former, particularly common in some branches of religion, considers that all is consciousness. A particularly strong example is that of Bishop George Berkeley, who argued for "empirical idealism": that (effectively) the universe, and all that is experienced, is a figment of God's imagination. This is very similar to the Hindu notion of Brahman ("non-dual pure consciousness, indivisible, incorporeal, infinite, and all-pervading"), but distinct from the Buddhist dharmic samsara/nirvana dichotomy.
2.3 Physicalist monism argues that there are no mental states distinct from physical brain and nervous-system states. Eliminativism, for example, argues that just as the sciences of astronomy and chemistry eventually eliminated the false folkloric notions of astrology and alchemy, so too will the mental states of everyday discourse (e.g., intent, belief, desire, love, pain) be shown to be false, as will the study of psychology. The proposition is argued by philosophers such as Wilfrid Sellars, Richard Rorty, Paul and Patricia Churchland and Daniel Dennett. Some eliminativists, such as Frank Jackson, claim that consciousness does not exist except as an epiphenomenon of brain function; others, such as Georges Rey, claim that the concept will eventually be eliminated as neuroscience progresses.
2.4 A related form of physical monism is reductive materialism, also known as "Type Physicalism", which argues that mental events can be grouped into types, and can then be correlated with types of physical events in the brain. Its origins are with the psychologist Edwin Boring (The Physical Dimensions of Consciousness, 1933), and it was further developed by Ullin Place, Herbert Feigl, and Jack Smart. One conflict that arises in type physicalism is the possibility of the type-token distinction. If type physicalism is true, then mental state M1 would be identical to brain state B1. However token-physicalism, as argued by Hilary Putnam and Jerry Fodor, argues for "multiple realisability"; the same mental state can be produced from many different physical brain states. Experiments with colour recognition seem to support multiple realisability.
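A toy analogy for multiple realisability, with invented class names and no claim about actual neurophysiology: one functional role is played by two entirely different substrates, which is the sense in which a single mental-state type could have many physical tokens.

```python
# A toy analogy (not a claim about brains): "multiple realisability" says one
# mental-state *type* can be realised by different physical *tokens*. Here a
# single functional role, withdraw_from(stimulus), is realised by two entirely
# different substrates. The class names are invented for illustration only.

class CarbonNeuralNet:
    def withdraw_from(self, stimulus):
        return f"C-fibres fire; limb withdraws from {stimulus}"

class SiliconController:
    def withdraw_from(self, stimulus):
        return f"servo loop triggers; actuator withdraws from {stimulus}"

def pain_role(realiser, stimulus="heat"):
    # Type physicalism would identify 'pain' with one specific physical state;
    # multiple realisability lets any realiser that plays the role count.
    return realiser.withdraw_from(stimulus)

print(pain_role(CarbonNeuralNet()))
print(pain_role(SiliconController()))
```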
3.0 Chinese Rooms, Mary The Scientist, Swamp-men and Philosophical Zombies
3.1 Reductive monism, whether physical or ideal, can be challenged by four related thought experiments: the Chinese Room by John Searle (1980), Mary's Room by Frank Jackson (1982), Swampman by Donald Davidson (1987), and Philosophical Zombies by David Chalmers (1996). All four are examples of arguments that emphasise not just the importance of subjectivity and qualia, but also introduce issues relating to understanding, meaning, and language.
3.2 The Chinese Room article is specifically presented as an argument against artificial consciousness, however it is particularly important as a challenge to physicalist monism which, like the AI advocates of computationalism, argues that the mind is an information processing system operating on symbols (cf. the Turing Test). "The Chinese Room" takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Whilst it (being the system, or the individual processing the data) can carry on a conversation, at no point does it understand the characters. As Searle (1999) argues: "The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese".
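A toy sketch of the Room as pure rule-following (the two rules below are arbitrary example entries, not from Searle's paper): symbols in, symbols out, with nothing in the process that could count as understanding them.

```python
# A toy sketch of the Chinese Room as pure rule-following: input symbols are
# matched against a lookup table and other symbols are emitted, with no
# semantics anywhere in the process. The two rules below are arbitrary toy
# entries chosen only to illustrate the shape of the argument.

RULE_BOOK = {
    "你好吗": "我很好",          # rule: symbol string A -> symbol string B
    "你叫什么名字": "我叫房间",   # rule: symbol string C -> symbol string D
}

def room(input_symbols: str) -> str:
    """Follow the rule book; 'understanding' plays no role in producing the output."""
    return RULE_BOOK.get(input_symbols, "？")

print(room("你好吗"))   # emits the paired symbols without knowing what they mean
```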
3.3 One strong response to the Chinese Room argument concedes the example, but challenges it with a robot that has an extended sensory system and therefore attaches semantic correlations from sensory input to symbols. The "Robot Reply" has been endorsed at different points by Margaret Boden, Daniel Dennett, Jerry Fodor, Hans Moravec et al. For Searle, this is just additional input, and whilst it may strengthen the rule-based system, it still doesn't provide understanding. Tim Crane eventually ties this criticism to social interaction, something which Searle neglected to make sufficiently explicit – and was therefore prone to accusations that he was begging the question: "… if Searle had not just memorized the rules and the data, but also started acting in the world of Chinese people, then it is plausible that he would before too long come to realize what these symbols mean" (The Mechanical Mind, 1996).
3.4 Frank Jackson's "Mary's Room" is a thought experiment that is aimed against physicalism in particular. Mary is a scientist of the neurophysiology of vision, but has done all her work in a black-and-white environment. She knows all about the physical properties of colour, wavelengths, the effects on the retina and brain etc., but has never experienced colour. Once she experiences colour, does she learn anything new? If she does, then not all knowledge is physical knowledge; Mary has learned about qualia: subjective, qualitative properties of experiences. Arguably, Mary has merely gained an acquaintance with facts or abilities that she already had.
3.5 In the Swampman argument Donald Davidson explores the mind of a replicant: "Suppose Davidson goes hiking in the swamp and is struck and killed by a lightning bolt. At the same time, nearby in the swamp another lightning bolt spontaneously rearranges a bunch of molecules such that, entirely by coincidence, they take on exactly the same form that Davidson's body had at the moment of his untimely death." Swampman would make noises that his friends and family would interpret as language, but, according to teleological theories (and Davidson's own theory of content) Swampman has no ideas about philosophy, no perceptions of his surroundings and no beliefs or desires about anything at all; Swampman has no intentional states.
3.6 A variant on Swampman is the Philosophical Zombie of David Chalmers (although it dates back to Robert Kirk, 1974). It is a being that is physically indistinguishable from a normal human being, even down to neurological effects, however it lacks conscious experience and qualia; "all is silent and dark within" (Iris Murdoch). The strength of the philosophical zombie argument is that (a) we recognise that people have periods without consciousness and (b) that human behaviour can occur automatically (e.g., sleepwalking). Elaborating these to the extreme, philosophical zombies are logically conceivable (even if practically improbable).
3.7 According to the physicalists, everything – including consciousness – is reducible to pure physicality. If physicalism is true, then conscious experience must exist in all possible worlds that contain the same physical facts as our world. However, it is possible to conceive of a possible world where the physical facts are the same but there is no consciousness (a world where p-zombies exist). Therefore, physicalism is false. The form of the argument is modus tollens in propositional logic: if P then Q; not Q, therefore not P.
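Restated compactly (my paraphrase of the argument above), with P standing for "physicalism is true" and Q for "every world physically identical to ours contains consciousness":

```latex
% A compact restatement of the zombie argument above (my paraphrase).
% P: physicalism is true.
% Q: every world physically identical to ours contains consciousness.
\[
  (P \rightarrow Q),\quad \neg Q \;\;\vdash\;\; \neg P
  \qquad \text{(modus tollens)}
\]
```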
4.0 Never Mind, It Doesn't Matter; Escaping Monist Reductionism and Avoiding Substance Dualism
4.1 In contrast with monist approaches (whether physicalist or idealist), there are a variety of dualist approaches, which argue that the mind and the brain are ontologically separate categories. There are three basic types: substance dualism, property dualism, and predicate dualism.
4.2 Substance dualism is the argument that the mind and brain are different substances. Originating with Descartes (and famously resulting in the "Cartesian dualism" of res cogitans versus res extensa), it argues that the mental universe is not extensible into space, and the material cannot think. It is considered compatible with most theological perspectives that distinguish in substance between the body and the mental "soul". Whilst extremely influential in the history of the mind-body problem, substance dualism is not considered a popular theory due to numerous problems showing that the brain and mind at least have high levels of correlation (the mental effects of brain damage are indisputable, e.g., Phineas Gage).
4.3 Property dualism argues that whilst there is but one type of substance (physical or ideal), there are two types of resulting properties: physical and mental. Non-physical, mental properties (such as beliefs, desires and emotions) correlate with some physical substances (the brain), but are not reducible to them. Examples include emergentism and supervenience, with advocates ranging from John Stuart Mill to Jaegwon Kim. In this model mental states emerge from, or depend on, brain states, but cannot be reduced to brain states as they have different properties. Emergence and supervenience are notably popular among biological scientists (e.g. Philip Kitcher, Elliot Sober, Alexander Rosenberg). Property dualists can be further split into epiphenomenalists and interactionists. In the former view, physical states can give rise to mental states, but not the reverse. Interactionism claims that mental states can produce material effects (and vice-versa).
4.5 Predicate dualism, advocated by Donald Davidson and Jerry Fodor, argues that while there is only one ontological category of substances and properties of substances, the predicates that we use to describe mental events cannot be reduced to the physical predicates of natural languages. Another description of predicate dualism, used by Donald Davidson, is "anomalous monism" (Mental Events, 1970). The theory states that (a) mental events are identical with physical events, and (b) mental events are anomalous, i.e. not regulated by strict physical laws. Davidson also developed the notion of supervenience to answer critics who noted that this wasn't really a form of physicalism.
4.6 A final argument against reductive physicalism is that it is pragmatically impossible. Physicalism can only provide statements of facticity: quantity and spatio-temporal location. It cannot provide any information on moral norms or aesthetic expressions. Although physical facts, moral norms, and aesthetic expressions all depend on, or emerge from, empirical foundations, no one approach can provide answers to the others; they are pragmatically incommensurable. Daniel Dennett's Consciousness Explained (1991) is particularly noteworthy for this category error. It is as risible to argue that morals and aesthetics are reducible to facts as it is to suggest that aesthetics or facts are reducible to moral norms etc.

