Archive
I Think Therefore I Am?
The only thing that can be verified 100% to exist is your own consciousness (“I think, therefore I am”). Does this affect or change your own beliefs in any way, and how so?
In a way it is true that our consciousness is the only thing that we can verify 100%; however, that way of looking at it may itself not be 100% verifiable. Since cognition is only one aspect of our consciousness, we don’t know whether the way that ‘our’ consciousness seems to that part of ‘us’ is truly limited to personal experience or whether it is only the tip of the iceberg of consciousness.
The nature of consciousness may be such that it supplies a sense of limitation and personhood which is itself permeable under different states of consciousness. We may be able to use our consciousness to verify conditions beyond its own self-represented limits, and to do so without knowing how we are able to do it. If we imagine that our consciousness when we are awake is like one finger on a hand, there may be other ‘fingers’ parallel to our own which we might call our intuition or subconscious mind. All of the fingers could have different ways of relating to each other as separate pieces while at the same time all being part of the same ‘hand’ (or hand > arm > body).
With this in mind, Descartes’ cogito “I think therefore I am” could be rephrased in the negative to some extent. The thought that it is only “I” who is thinking may not be quite true, and all of our thoughts may be pieces of a larger puzzle which the “I” cannot recognize ordinarily. It still cannot be denied that there is a thought, or an experience of thinking, but it is not as undeniable that we are the “I” that “we” think we are.
The modern world view is, in many ways, the legacy of Cartesian doubt. Descartes has gotten a bad rap, ironically due in part to the success of his opening the door to purely materialistic science. Now, after 400 years of transforming the world with technology, it seems prehistoric to many to think in terms of a separate realm of thoughts which is not physical. Descartes does not have the opportunity to defend himself, so his view is an easy target – a straw man even. When we update the information that Descartes had, however, we might see that Cartesian skepticism can still be effective.
Some things which Descartes didn’t have to draw upon in constructing his view include:
1) Quantum Mechanics – QM shifted microphysics from a corpuscular model of atoms to one of quantitative abstractions. Philosophically, quantum theory is ambiguous in both its realism/anti-realism and nominalism/anti-nominalism. Realism starts from the assumption that there are things which exist independently of our awareness of them, while nominalism considers abstract entities to be unreal.
- Because quantum theory is the base of our physics, and physics precedes our biology, quantum mechanics can be thought of as a realist view. Nature existed long before human consciousness did, and nature is composed of quantum functions. Quantum goes on within us and without us.
- Because quantum has been interpreted as being at least partially dependent on acts of detection (e.g. “Experiment confirms quantum theory weirdness”), it can be considered an anti-realist view. Unlike classical objects, quantum phenomena are subject to states like entanglement and superposition, making them more like sensory events than projectiles. Many physicists have emphatically stated that the fabric of the universe is intrinsically participatory rather than strictly ‘real’.
- Quantum theory is nominalist in the sense that it removes the expectation of purpose or meaning in arithmetic. “Shut up and calculate.” is a phrase* which illustrates the nominalist aspects of QM to me; the view is that it doesn’t matter whether these abstract entities are real or not, just so long as they work.
- Quantum theory is anti-nominalist because it shares the Platonic view of a world which is made up of perfect essences – phenomena which are ideal rather than grossly material. The quantum realm is one which can be considered closer to Kant’s ‘noumena’ – the unexperienced truth behind all phenomenal experience. The twist in our modern view is that our fundamental abstractions have become anti-teleological. Because quantum theory relies on probability to make up the world, instead of a soul as a ghost in the material machine, we have a machine of ghostly appearances without any ghost.
To some, these characteristics, when taken together, seem contradictory or incomprehensible…mindless mind-stuff or matterless matter. To others, the philosophical content of QM is irrelevant or merely counter-intuitive. What matters is that it makes accurate predictions, which makes it a pragmatic, empirical view of nature.
2) Information Theory and Computers
The advent of information processing would have given Descartes something to think about. Being neither mind nor matter, or both, the concept of ‘information’ is often considered a third substance or ‘neutral monism’. Is information real though, or is it the mind treating itself like matter?
Hardware/software relation
This metaphor gets used so often that it is now a cliche, but the underlying analogy has some truth. Hardware exists independently of all software, but the same software can be used to manipulate many different kinds of hardware. We could say that software is merely our use of hardware functions, or we could say that hardware is just nature’s software. Either way there is still no connection to sensory participation. Neither hardware nor software has any plausible support for qualia.
Absent qualia
Information, by virtue of its universality, has no sensory qualities or conscious intentions. It makes no difference whether a program is executed on an electronic computer or a mechanical computer of gears and springs, or a room full of people doing math with pencil and paper. Information reduces all descriptions of forms and functions to interchangeable bits, so identical information processes would remain identical regardless of whether there were any emergent qualities associated with them. There is no place in math for emergent properties which are not mathematical. Instead of a ‘res cogitans’ grounded in mental experience, information theory amounts to a ‘res machina’…a realm of abstract causes and effects which is both unextended and uninhabited.
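To make the interchangeability concrete, here is a purely illustrative sketch of my own (the function names and the toy example are mine, not anything from information theory proper): two very different procedures, standing in for two different ‘substrates’, produce bit-for-bit identical output, and nothing in the output records which one did the work.

```python
# Illustrative sketch: two different "substrates" -- native arithmetic and a
# deliberately clunky loop of single increments -- yield bit-identical results.
# The bits carry no trace of how, or on what, they were produced.

def add_arithmetic(a: int, b: int) -> int:
    """Add using the machine's native arithmetic."""
    return a + b

def add_by_counting(a: int, b: int) -> int:
    """Add by repeated incrementing, as a patient clerk with pencil and paper might."""
    total = a
    for _ in range(b):
        total += 1
    return total

if __name__ == "__main__":
    a, b = 19, 23
    bits_fast = format(add_arithmetic(a, b), "b")
    bits_slow = format(add_by_counting(a, b), "b")
    print(bits_fast, bits_slow, bits_fast == bits_slow)  # identical bits, no trace of the substrate
```

Whatever it might feel like (or not feel like) to be either process is simply absent from the description.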
The receding horizon of strong AI
If Descartes were around today, he might notice that computer systems which have been developed to work like minds lack the aesthetic qualities of natural people. They make bizarre mistakes in communication which remind us that there is nobody there to understand or care about what is being communicated. Even though there have been improvements in the sophistication of ‘intelligent’ programs, we still seem to be no closer to producing a program which feels anything. To the contrary, when we engage with AI systems or even CGI games, there is an uncanny quality which indicates a sterile and unnatural emptiness.
Incompleteness, fractals, and entropy
Gödel’s incompleteness theorem formalized a paradox which underlies formal systems – that any consistent system rich enough to express arithmetic contains true statements which cannot be proved within that system. This introduces a kind of nominalism into logic – a reason to doubt that logical propositions can be complete and whole entities. Douglas Hofstadter wrote about strange loops as a possible source of consciousness, citing complexity of self-reference as a key to the self. Fractal mathematics has been used to graphically illustrate some aspects of self-similarity or self-reference, and some, like Wai H. Tsang, have proposed that the brain is a fractal.
The work of Turing, Boltzmann, and Shannon treats information in an anti-nominalist way. Abstract data units are considered to be real, with potentially measurable effects in physics via statistical mechanics and through the concept of entropy. The ‘It from Bit’ view described by Wheeler is an immaterialist view that might be summed up as “It computes, therefore it is.”
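The formal kinship that makes this plausible can be written down in a line or two. Shannon’s information entropy and Boltzmann’s thermodynamic entropy share the same mathematical shape, and Landauer’s principle ties them together by putting a minimum heat cost on erasing a single bit. (These are standard formulas, added here only for reference; they are not specific to Wheeler’s essay.)

```latex
H = -\sum_i p_i \log_2 p_i \qquad \text{(Shannon entropy, in bits)}
S = k_B \ln W \qquad \text{(Boltzmann entropy, in J/K)}
E_{\text{erase one bit}} \;\ge\; k_B T \ln 2 \qquad \text{(Landauer's bound)}
```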
3) Simulation Triumphalism
Disneyland
When Walt Disney produced full-length animated features, he employed the techniques of fine art realism to bring completely simulated worlds to life in movie theaters. For the first time, audiences experienced immersive fantasy which featured no ‘real’ actors or sets. Disney later extended his imaginary worlds across the Cartesian divide to become “real” places, physical parks which are constructed around imaginary themes, turning the tables on realism. In Disneyland, nature is made artificial and artifice is made natural. Audio-Animatronic robots populate indoor ‘dark rides’ where time can seem to stop at midnight even in the middle of a summer day.
Video games
The next step in the development of simulacra culture took us beyond Hollywood theatrics and naturalistic fantasy. Arcade games featured simulated environments which were graphically minimalist. The simulation was freed from having to be grounded in the real world at all and players could identify with avatars that were little more than a group of pixels.
Video, holographic, and VR technologies have set the stage for acceptance of two previously far-fetched possibilities. The first possibility is that of building artificial worlds which are constructed of nothing but electronically rendered data. The second possibility is that the natural world is itself such an illusion or simulation. This echoes Eastern philosophical views of the world as illusion (maya) as well as being a self-reflexive pattern (Jeweled Net of Indra). Both of these are suggested by the title of the movie The Matrix, which asks whether being able to control someone’s experience of the world means that they can be controlled completely.
The Eastern and Western religious concepts overlap in their view of the world as a Matrix-like deception against a backdrop of eternal life. The Eastern view identifies self-awareness as the way to control our experience and transcend illusion, while the Abrahamic religions promise that remaining devoted to the principles laid down by God will reveal the true kingdom in the afterlife. The ancients saw the world as unreal because the true reality can only be God or universal consciousness. In modern simulation theories, everything is unreal except for the logic of the programs which are running to generate it all.
4) Relativity
Einstein’s Theory of Relativity went a long way toward mending the Cartesian split by showing how the description of the world changes depending upon the frame of reference. Previously fixed notions of space, time, mass, and energy were replaced by dynamic interactions between perspectives. The straight, uniform axes of x, y, z, and t were traded for a ‘reference-mollusk’ with new constants, such as the spacetime interval and the speed of light (c). The familiar constants of Newtonian mechanics and Cartesian coordinates were warped and animated against a tenseless, non-Euclidean space with no preferred frame of reference.
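For reference, the new constant that survives all the warping can be written down directly: observers in different inertial frames disagree about the individual Δt, Δx, Δy, and Δz between two events, but they all compute the same spacetime interval (shown here in one common sign convention):

```latex
(\Delta s)^2 = c^2(\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2
```

It is this agreed-upon quantity, not the separate coordinates, that plays the role the old absolutes used to play.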
Even before quantum mechanics introduced a universe built on participation, Relativity had punched a hole in the ‘view from nowhere’ sense of objectivity which had been at the heart of the scientific method since the 17th century. Now the universe required us to pick a point within spacetime and a context of physical states to determine the appearance of ‘objective’ conditions. Descartes’ extended substance had become transparent in some sense, mimicking the plasticity and multiplicity of the subjective ‘thinking substance’.
5) Neuroscience
Descartes would have been interested to know that his hypothesis of the seat of consciousness being the pineal gland had been disproved. People have had their pineal glands surgically removed without losing consciousness or becoming zombies. The advent of MRI technology and other imaging also has given us a view of the brain as having no central place which acts as a miniature version of ourselves. There’s no homunculus in a theater looking out on a complete image stored within the brain. There is also no hint of dualism in the brain as far as a separation between how and where fantasy is processed. To the contrary, all of our waking experiences seamlessly fuse internal expectations with external stimuli.
Neuroscience has conclusively shattered our naive realism about how much control we have over our own mind. Benjamin Libet’s experiments showed that by the time we think that we are making a decision, prior brain activity could be used to predict what the decision would be. With perceptual tests we have shown that our experience of the real world not only contains glaring blind spots and distortions but that those distortions are masked from our direct inspection. Perception is incomplete; however, that is no reason to conclude that it is an illusion. We still cannot doubt the fact of perception, only note that in the complex kind of perception a human being has, there are opportunities for conflicts between levels.
Neuroscientific knowledge has also opened up new appreciation for the mystery of consciousness. Some doctors have studied Near Death Experiences and Reincarnation reports. Others have talked about their own experiences in terms which suggest a more mystical presence of universal consciousness than we have imagined. Slowly the old certainties about consciousness in medicine are being challenged.
6) Psychology
Psychology has developed a model of mental illness which is natural rather than supernatural. Conditions such as schizophrenia and even depression are diagnosed and treated as neurological disorders. The use of brain-altering drugs, both medically and recreationally, has given us new insights into the specificity of brain function. Modern psychology has questioned earlier ideas such as Freud’s Id, Ego, and Superego, and the monolithic “I” before that, so that there are now many neurochemical roles and systems which are understood to contribute to making “us”.
To Descartes’ Cogito, the contemporary psychologist might ask whether the “I” refers to the sense of an inner voice who is verbalizing the statement, or to the sense of identification with the meaning of the concept behind the words, etc.
In all of the excitement of mapping mental symptoms to brain states, some of the most interesting work in psychology has languished. William James, Carl Jung, Piaget, and others presented models of the psyche which were more sympathetic to views of consciousness as a continuum or spectrum of conscious states. Because it shifts the focus away from firsthand accounts and toward medical observation, some have criticized the neuroscientific influence on psychology as a pseudoscience akin to phrenology. The most important part of the psyche is overlooked, and patients are reduced to sets of correctable symptoms.
7) Semiotics
Perhaps the most underappreciated contribution on this list is that of semioticians such as C.S. Peirce and de Saussure. Before electronic computing was even imagined, they had begun to formalize ideas about the relation between signs and what is signified. Instead of a substance dualism of mind and matter, semiotic theories introduced triadic formulations such as between signs, objects, and concepts.
Baudrillard wrote about levels of simulation or simulacra, in which a basic reality is first altered or degraded, then that alteration is masked, then finally separated from any reality whatsoever. Together, these notions of semiotic triads and levels of simulation can help guide us away from the insolubility of substance dualism. Reality can be understood as a signifying medium which spans mind-like media and matter-like media. Sense and sense-making can be reconciled without inverting them into disconnected ‘information’.
8) Positivism & Post-Modernism
The certainty which Descartes expressed as a thinker of thoughts can be seen to dissolve when considered in the light of 20th century critics. Heavily criticized themselves by some, philosophers such as Wittgenstein, Derrida, and Rorty continue to be relevant to undermining the incorrigibility of consciousness. The Cogito can be deconstructed linguistically until it is meaningless, or nothing but the product of the bias of language or culture. Under Wittgenstein’s Tractatus, the Cogito can be seen as a failure of philosophy’s purpose in clarifying facts, thereby deflating it to an empty affirmation of the unknowable. Since, in his words, “Whereof one cannot speak, thereof one must be silent,” we may be compelled to eliminate it altogether.
What logical positivism and deconstructivism do with language to our idea of consciousness is like what neuroscience does through medicine; they demand that we question even the most basic identities and undermine our confidence in the impartiality of our thoughts. In a sense, it is an invitation for a cross-examination of ourselves as our own prosecution witness.
Wilfrid Sellars’ attack on the Myth of the Given sees statements such as the Cogito as forcing us to accept a contradiction where sense-data (such as “I think”) are accepted as a priori facts, but justified beliefs (“therefore I am”) have to be acquired. How can consciousness be ‘given’ if understanding is not? This would seem to point to consciousness as a process rather than a state or property. This, however, fails to account for lower levels of consciousness which might be responsible for even the micro-level processing.
In my view, logic- and language-based arguments against incorrigibility fail because they overlook their own false ‘given’, which is that symbols can literally signify reality. In fact, symbols have no authority or power to provide meaning, but instead act as a record for those who intend to preserve or communicate meaning.
An updated Cogito
“I think, therefore I am at least what a thinker thinks is a thinker.”
Rather than seeing Cartesian doubt as only a primitive beginning to science, I think it makes sense to try to pick up where he left off. By adding the puzzle pieces which have been acquired since then, we might find new respect for the approach. Relativism itself may be relative, so that we need not be compelled to deconstruct everything. We can consider that our sense that deconstruction and solipsism are absurd may be well founded, and that just because our personal intuition is often flawed does not mean that kneejerk counter-intuition is any better.
With that in mind, is the existence of the “I” really any more dubious than a quark or a rainbow? Does it serve us to insist upon rigid designations of ‘real’ vs ‘illusion’ in a universe which has demonstrated that its reality is more like illusion? At the same time, does it serve us to deny that all experiences are in some sense ‘real’, regardless of their being ineffable to us now?
*attributed to David Mermin, Richard Feynman, or Paul Dirac (depending on who you ask)
Consciousness can be mindless, but Mind cannot be unconscious
The mind is the cognitive range of consciousness. Consciousness includes many more aesthetic forms than just mind.
Semiotics: What are the implications of the Saussurian sign (signifier/signified) for a theory of meaning?
In my theory of meaning, Saussurian concepts of signifier and signified are a good start, but I propose a fundamental change. In his answer, Keith Allpress offers:
here is where I think we stand:
Shannon removed content from meaning by using bits.
Saussure claimed that language creates meaning.
and points out the limitations of post-modern/relativistic/deconstructionist approaches. I would say that the computationalist approach is similarly limited, in that there is no compelling reason that ‘it from bit’ should apply to all aspects of meaning. I think that what is missing from these two approaches is the same thing, only seen from opposite sides. To understand more about that thing, we can begin by asking:
“What cares about the difference?”
I think where Saussure and modern semiotics in general went too far is in presuming representation without presentation. The error of the computationalist view is even more subtle, as it presumes presentation as an emergent property, thereby taking it outside of the realm of science, but without admitting it. To me, this is a very seductive but misguided approach which leads directly to the Emperor’s emergent clothes.
Taking the term ‘signifier’, we can crack the kernel of truth that semiotics-as-cosmology is based on. Just as it is not incorrect to call someone who is driving a car a ‘driver’, neither is ‘driver’ a complete description of the role of human beings in the world. What is missing? What *cares* that something is missing? What fills the gap is what I call aesthetic participation, or sensory-motive presence. In my view, before ‘information’ (a difference that makes a difference per Bateson) or sign, there must be the raw sensitivity to detect and interpret such ‘differences’ or ‘signs’ and to *care* about those differences. What we have done, by reifying pattern as objectively real things which are recognized, or de-realizing things as subjectively constructed patterns, is to void the existence of sense and sense-making itself.
Not to get too cheeky, but what I propose is that beneath Bateson’s adage is a deeper context from which information and signs emerge: an aesthetic phenomenon which likes its own likeness by making its own differences. I call this primordial pansensitivity, or ‘sense’ and the particular quality of appreciation that it cares about I call ‘significance’. Significance cannot be automated, it must be earned directly through intimate acquaintance. It may sound like I am talking about human intimacy here, but I mean nothing of the sort. By acquaintance (stealing that word from Chalmers), I mean sensory-motive encounters on a fundamental level: before humans, before biology, and before even matter. The universe has to make sense before anything can make sense of it.
The aesthetic agenda is purely hedonistic. It is to develop ever richer textures and modalities of appreciation. While the universe is replete with repeating patterns, it never seems to repeat its particular, proprietary holons. A whirlpool, hurricane, and galaxy all share the same unmistakable topology, but nobody would mistake one for the other. Not just the scale but everything that constitutes their appearance and role in the universe is different. In calling the universe signs or bits we are losing the appreciation and proprietary character. The unique and worthwhile becomes generic and inevitable. It ultimately is to make meaning meaningless.
Names (representations) can be related to each other in ways that nature (presentations) cannot be. The equal sign is itself a name for one of these relations. In nature nothing can be absolutely equal to anything else. All of nature is unrepeatably unique in a literal sense, but will seem to be made of repetition and variation from any particular perspective within it. In this way, the postmodernists are right. We have only the presence of our own ability to feel that can be known absolutely as it is. Everything else that exists for us, within our individually customized experience has some degree of approximation/representation.
What makes this even more complicated and confusing is that there are different levels of sense-making which can contradict each other. We would like to think of signs as simply a case of dictionary definitions, where signs literally signify what we expect they should signify. Even the identity principle of A = A is subject to a deeper degree of expectation about what A and = mean in different contexts. We can look at a surreal painting and say ‘that is a painting of something impossible’, but it is only our expectation that the paint shapes refer to something other than themselves which is being misled. What surrealism signifies is not ‘real’, but neither is it nothing.
Where the computationalists are right is in seeing the uniformity of arithmetic principles across all phenomena which can be measured. Reducing all transactions to bits obviously has been tremendously transformative in this century. By banishing the aesthetic qualities (qualia) to an emergent never-never land, however, we have been seduced by the representation of measure (quanta). Simulation-type theories now abound, in which the entire history of human experience (including the development of science, but shh…) is marginalized as a confabulation/illusion/model, and the only true reality is one which can never be contacted in any way except through theoretical abstraction. We either live in an unreal world, or the world which we now think is real is not the one that we actually live in. We are being asked to believe that meaning is meaningless and that the only alternative to solipsism is a kind of ‘nilipsism’* in which even our ennui is yet another meaningless function of the program.
To turn the page on this era of de-presentation**, I suggest that we look at the roots of semiotics more deeply, and recognize that signs themselves depend upon a deeper context of sensation and sense-making which goes beyond even physics or human experience.
*a word I made up to describe the philosophy that the self (ipse) must be reduced to a non-entity.
**another neologism that I use to refer to what Raymond Tallis calls the ‘Disappearance of Appearance’…the overlooking of the phenomenon of aesthetic presence itself.
Wittgenstein, Physics, and Free Will
JE: My experience from talking to philosophers is that Wittgenstein’s view is certainly contentious. There seem to be two camps. There are those seduced by his writing who accept his account and there are others who, like me, feel that Wittgenstein expressed certain fairly trivial insights about perception and language that most people should have worked out for themselves and then proceeded to draw inappropriate conclusions and screw up the progress of contemporary philosophy for fifty years. This latter would be the standard view amongst philosophers working on biological problems in language as far as I can see.
Wittgenstein is right to say that words have different meanings in different situations – that should be obvious. He is right to say that contemporary philosophers waste their time using words inappropriately – anyone from outside sees that straight away. But his solution – to say that the meaning of words is just how they are normally used – is no solution; it turns out to be a smoke screen to allow him to indulge his own prejudices and not engage in productive explanation of how language actually works inside brains.
The problem is that there is a weaseling going on which, as I indicated before, leads to Wittgenstein encouraging the very crime he thought he was clever to identify. The meaning of a word may ‘lie in how it is used’ in the sense that the occurrences of words in talk are functionally connected to the roles words play in internal brain processes and relate to other brain processes, but this is trivial. To say that meaning is use is, as I said, clearly a route to the W crime itself. If I ask how do you know meaning means use you will reply that a famous philosopher said so. Maybe he did, but he also said that words do not have unique meanings defined by philosophers – they are used in all sorts of ways and there are all sorts of meanings of meaning that are not ‘use’, as anyone who has read Grice or Chomsky will have come to realise. Two meanings of a word may be incompatible yet it may be well nigh impossible to detect this from use – the situation I think we have here. The incompatibility only becomes clear if we rigorously explore what these meanings are. Wittgenstein is about as much help as a label on a packet of pills that says ‘to be taken as directed’.
But let’s be Wittgensteinian and play a language game of ordinary use, based on the family resemblance thesis. What does choose mean? One meaning might be to raise in the hearer the thought of having a sense of choosing. So a referent of ‘choose’ is an idea or experience that seems to be real and I think must be. But we were discussing what we think that sense of choosing relates to in terms of physics. We want to use ‘choose’ to indicate some sort of causal relation or an aspect of causation, or if we are a bit worried about physics still having causes we could frame it in terms of dynamics or maybe even just connections in a spacetime manifold. If Wheeler thinks choice is relevant to physics he must think that ‘choose’ can be used to describe something of this sort, as well as the sense of choosing.
So, as I indicated, we need to pin down what that dynamic role might be. And I identified the fact that the common presumption about this is wrong. It is commonly thought that choosing is being in a situation with several possible outcomes. However, we have no reason to think that. The brain may well not be purely deterministic in operation. Quantum indeterminacy may amplify up to the level of significant indeterminacy in such a complex system with such powerful amplification systems at work. However, this is far from established, and anyway it would have nothing to do with our idea of choosing if it were just a level of random noise. So I think we should probably work on the basis that the brain is in fact as tightly deterministic as matters here. This implies that in the situation where we feel we are choosing THERE IS ONLY ONE POSSIBLE OUTCOME.
The problem, as I indicated, is that there seem to be multiple possible outcomes to us because we do not know how our brain is going to respond. Because this lack of knowledge is a standard feature of our experience, our idea of ‘a situation’ is better thought of as ‘an example of an ensemble of situations that are indistinguishable in terms of outcome’. If I say when I get to the main road I can turn right or left, I am really saying that I predict an instance of an ensemble of situations which are indistinguishable in terms of whether I go right or left. This ensemble issue of course is central to QM and maybe we should not be so surprised about that – operationally we live in a world of ensembles, not of specific situations.
So this has nothing to do with ‘metaphysical connotations’ which is Wittgenstein’s way of blocking out any arguments that upset him – where did we bring metaphysics in here? We have two meanings of choose. 1. Being in a situation that may be reported as being one of feeling one has choice (to be purely behaviourist) and 2. A dynamic account of that situation that turns out not to agree with what 99.9% of the population assume it is when they feel they are choosing. People use choose in a discussion of dynamics as if it meant what it feels like in 1 but the reality is that this use is useless. It is a bit like making burnt offerings to the Gods. That may be a use for goats but not a very productive one. It turns out that the ‘family resemblance’ is a fake. Cousin Susan who has pitched up to claim her inheritance is an impostor. That is why I say that although to ‘feel I am choosing’ is unproblematic the word ‘choice’ has no useful meaning in physics. It is based on the same sort of error as thinking a wavefunction describes a ‘particle’ rather than an ensemble of particles. The problem with Wittgenstein is that he never thought through where his idea of use takes you if you take a careful scientific approach. Basically I think he was lazy. The common reason why philosophers get tied in knots with words is this one – that a word has several meanings that do not in fact have the ‘family relations’ we assume they have – this is true for knowledge, perceiving, self, mind, consciousness – all the big words in this field. Wittgenstein’s solution of going back to using words the way they are ‘usually’ used is nothing more than an ostrich sticking its head in the sand.
So would you not agree that in Wheeler’s experiments the experimenter does not have a choice in the sense that she probably feels she has? She is not able to perform two alternative manoeuvres on the measuring set up. She will perform a manoeuvre, and she may not yet know which, but there are no alternatives possible in this particular instance of the situation ensemble. She is no different from a computer programmed to set the experiment up a particular way before the particle went through the slits, contingent on a meteorite not shaking the apparatus after it went through the slits (causality is just as much an issue of what did not happen as what did). So if we think this sort of choosing tells us something important about physics we have misunderstood physics, I believe.
Nice response. I agree almost down the line.
As far as the meaning of words go, I think that no word can have only one meaning because meaning, like all sense, is not assembled from fragments in isolation, but rather isolated temporarily from the totality of experience. Every word is a metaphor, and metaphor can be dialed in and out of context as dictated by the preference of the interpreter. Even when we are looking at something which has been written, we can argue over whether a chapter means this or that, whether or not the author intended to mean it. We accept that some meanings arise unintentionally within metaphor, and when creating art or writing a book, it is not uncommon to glimpse and develop meanings which were not planned.
To choose has a lower limit, between the personal and the sub-personal which deals with the difference between accidents and ‘on purpose’ where accidents are assumed to demand correction, and there is an upper limit on choice between the personal and the super-personal in which we can calibrate our tolerance toward accidents, possibly choosing to let them be defined as artistic or intuitive and even pursuing them to be developed.
I think that this lensing of choice into upper and lower limits is, like red and blue shift, a property of physics – of private physics. All experiences, feelings, words, etc. can explode into associations if examined closely. All matter can appear as fluctuations of energy, and all energy can appear as changes in the behavior of matter. Reversing the figure-ground relation is a subjective preference. So too is reversing the figure-ground relation of choice and determinism a subjective preference. If we say that our choices are determined, then we must explain why there is such a thing as having a feeling that we choose. Why would there be a difference, for example, in the way that we breathe and the way that we intentionally control our breathing? Why would different areas of the brain be involved in voluntary control, and why would voluntary muscle tissue be different from smooth muscle tissue if there were no role for choice in physics? We have misunderstood physics in that we have misinterpreted the role of our involvement in that understanding.
We see physics as a collection of rules from which experiences follow, but I think that it can only be the other way around. Rules follow from experiences. Physics lags behind awareness. In the case of humans, our personal awareness lags behind our sub-personal awareness (as shown by Libet, etc) but that does not mean that our sub-personal awareness follows microphysical measurables. If you are going to look at the personal level of physics, you only have to recognize that you can intend to stand up before you stand up, or that you can create an opinion intentionally which is a compromise between select personal preferences and the expectations of a social group.
Previous Wittgenstein post here.
If You See Wittgenstein on the Road… (you know what to do)
Me butting into a language based argument about free will:
> I don’t see anything particularly contentious about Wittgenstein’s claim that the meaning of a word lies in how it is used.
Can something (a sound or a spelling) be used as a word if it has no meaning in the first place though?
>After all, language is just an activity in which humans engage in order to influence (and to be influenced by) the behaviour of other humans.
Not necessarily. I imagine that the origin of language has more to do with imitation of natural sounds and gestures. Onomatopoeia, for example. Clang, crunch, crash… these are not arbitrary signs which derive their meaning from usage alone. C. S. Peirce was on the right track in discerning among symbols (arbitrary signs whose meaning is attached by use alone), icons (signs which are isomorphic to their referent), and indices (signs which refer by inevitable association, as smoke is an index of fire). Words would not develop out of usage alone, but out of what they feel like to say and to hear, and the relation of that feeling to what is meant.
>I’m inclined to regard his analysis of language in the same light as I regard Hume’s analysis of the philosophical notion of ‘substance’ (and you will be aware that I side with process over substance) – i.e. there is no essential essence to a word. Any particular word plays a role in a variety of different language games, and those various roles are not related by some kind of underlying essence but by what Wittgenstein referred to as a family resemblance. The only pertinent question becomes that of what role a word can be seen to play in a particular language game (i.e. what behavioural influences it has), and this is an empirical question – i.e. it does not necessarily have any metaphysical connotations.
While Wittgenstein’s view is justifiably influential, I think that it belongs to the perspective of modernism’s transition to postmodernity. As such, it is bound by the tenets of existentialism, in which isolation, rather than totality, is assumed. I question the validity of isolation when it comes to subjectivity (what I call private physics), since I think that subjectivity makes more sense as a temporary partition, or diffraction, within the totality of experience rather than as a product of isolated mechanisms. A prism does not produce the visible spectrum by reinventing it mechanically – colors are instead revealed through the diffraction of white light. Much of what goes on in communication is indeed language games, and I agree that words do not have an isolated essence, but that does not mean that the meaning of words is not rooted in a multiplicity of sensible contexts. The pieces that are used to play the language game are not tokens; they are more like colored lights that change colors when they are put next to each other – lights which can be used to infer meaning on many levels simultaneously, because all meaning is multivalent/holographic.
> So if I wish to know the meaning of a word, e.g. ‘choice’, I have to LOOK at how the word is USED rather than THINK about what kind of metaphysical scheme might lie behind the word (Philosophical Investigations section 66 and again in section 340).
That’s a good method for learning about some aspects of words, but not others. In some cases, as in onomatopoeia, that is the worst way of learning anything about it, and you will wind up thinking that Pow! is some kind of commentary about humorous violence that has nothing to do with the *sound* of bodies colliding and its emotional impact. It’s like the anthropologist who gets the completely wrong idea about what people are doing because they are reverse-engineering what they observe back to other ethnographers’ interpretations rather than to the people’s experienced history together.
> So, for instance, when Jane asks me “How should I choose my next car?” I understand her perfectly well to be asking about the criteria she should be employing in making her decision. Similarly with the word ‘free’ – I understand perfectly well what it means for a convict to be set free. And so to the term ‘free will’; As Hume pointed out, there is a perfectly sensible way to use the term – i.e. when I say “I did it of my own free will”, all I mean is that I was not coerced into doing it, and I’m conferring no metaphysical significance upon my actions (the compatibilist notion of free will in contrast to the metaphysical notion of free will).
Why would that phrase ‘free will’ be used at all though? Why not just say “I was not coerced”, or nothing at all, since without metaphysical (or private physical) free will there would be no important difference between being coerced by causes within your body or beyond your body? Under determinism, there is no such thing as not being coerced.
> The word ‘will’ is again used in a variety of language games, and the family resemblance would appear to imply something about the future (e.g. “I will get that paper finished today”). When used in the free will language game, it shares a significant overlap with the choice language game. But when we lift a word out of its common speech uses and confer metaphysical connotations upon it, Wittgenstein tells us that language has ceased doing useful work (as he puts it in the PI section 38, “philosophical problems arise when language goes on holiday”).
We should not presume that work is useful without first assuming free will. Useful, like will, is a quality of attention, an aesthetic experience of participation which may be far more important than all of the work in the universe put together. It is not will that must find a useful function, it is function that acquires use only through the feeling of will.
> And, of course, the word ‘meaning’ is itself employed in a variety of different language games – I can say that I had a “meaningful experience” without coming into conflict with Wittgenstein’s claim that the meaning of a word lies in its use.
Use is only one part of meaning. Wittgenstein was looking at a toy model of language that ties only to verbal intellect itself, not to the sensory-motor foundations of pre-communicated experience. It was a brilliant abstraction, important for understanding a lot about language, but ultimately I think that it takes the wrong things too seriously. All that is important about awareness and language would, under the Private Language argument, be passed over in silence.
> Regarding Wheeler’s delayed choice experiment, the experimenter clearly has a choice as to whether she will deploy a detector that ignores the paths by which the light reaches it, or a detector that takes the paths into account. In Wheeler’s scenario that choice is delayed until the light has already passed through (one or both of) the slits. I really can’t take issue with the word ‘choice’ as it is being used here.
I think that QM also will eventually be explained by dropping the assumption of isolation. Light is visual sense. It is how matter sees and looks. Different levels of description present themselves differently from different perspectives, so that if you put matter in the tiniest box you can get, you give it no choice but to reflect back the nature of the limitation of that specific measurement, and measurement in general.
Cross Modal Synesthetic Abstraction
From a worthwhile thread on Quora.
“Below are two shapes. One of them is called Kiki and the other is called Bouba.
Almost all respondents when asked say that the jagged one is kiki and the rounded one is bouba. This can be observed across cultures. This is an innate ability of our brain by which one mode of sensation can cross over into another.”
This is a useful little nugget for MSR. A computer would have to be programmed specifically to correlate the names with the shapes, and such a correlation would be arbitrary from a programmatic perspective. By contrast, our cross-modal, cross-cultural preferences cohere intrinsically, by feel. Feeling is not a collision of objects, it is an aesthetic presence – it is our own participation in a discernment of subjects. The anthropological universality of certain linguistic-phonetic qualities and their association with other kinds of qualities (hard sounds, hard angles, sharp edges, etc.) is rooted in deeper universals of sense – deeper than evolution, deeper than matter even. If it didn’t run that deep (to the absolute bottom/top), then there would be no sense in sense at all. We would be like a computer, linking syntactic fragments together arbitrarily by statistical relevance rather than experiential content.
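To make the point about arbitrariness concrete, here is a hedged little sketch of my own (the shape descriptions and the lookup table are invented for illustration, not taken from any actual study or codebase): the only thing tying ‘kiki’ to the spiky shape in the program is a line someone typed, and swapping the two strings leaves the program just as ‘correct’ as before, whereas for most human respondents only one pairing feels right.

```python
# Illustrative sketch only: a hard-coded pairing of shape descriptions with the
# nonsense names from the bouba/kiki demonstration. The correlation lives
# entirely in the lookup table below -- reverse it and the program runs exactly
# as happily, which is the sense in which the mapping is arbitrary to the machine.

SHAPE_FEATURES = {
    "shape_A": {"contour": "spiky", "edges": "jagged"},
    "shape_B": {"contour": "rounded", "edges": "smooth"},
}

NAME_FOR_CONTOUR = {"spiky": "kiki", "rounded": "bouba"}  # swap the values; nothing breaks

def label(shape_id: str) -> str:
    """Return the nonsense name assigned to a shape by the hand-written table."""
    return NAME_FOR_CONTOUR[SHAPE_FEATURES[shape_id]["contour"]]

if __name__ == "__main__":
    for shape in SHAPE_FEATURES:
        print(shape, "->", label(shape))
```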
Likelihood is the ultimate unlikelihood: Notes on sense as sole synthetic a priori manifestation of improbability
In the contemporary Western model of the universe, mechanism is presumed to be the sole synthetic a priori – the general noumenal schema which can only be considered an eternal given and without which no phenomena can arise. In particular, the mechanism of statistical probability is seen as the engine of all possibility. Richard Dawkins’ title “The Blind Watchmaker” is an apt description – a kind of deism with no deity. Lawrence Krauss’ “A Universe From Nothing” is another apt title. The implication of both is that the universality of statistical distribution is the inevitable and inescapable self-evident truth of all phenomena.
What is overlooked in these models is the nature of probability itself – the concept of likelihood, and indeed the concept of ‘like’. The etymology of the word probable extends from French and Latin meanings of ‘provable’ and ‘agreeable’, a sense of credibility. What we like and what we find acceptable are similar concepts which both relate to, well, similarity. Agreement and likeness are in agreement. The two words are a-like. What is like alikeness though? What is similar to similarity or equivalent to equivalence?
Consider the equal sign. “=” is a visual onomatopoeia. It is a direct icon which looks like what it represents: two parallel lines which illustrate precise congruence by their relation to each other. It’s an effective sign only because no further description is possible. So ubiquitous is the sense of comparison by similarity that we can’t easily get under it. It simply is the case that one line appears identical to the other, and when something is identical to another thing, we can notice that, and it doesn’t matter if it’s a thought, feeling, sensation, experience…anything can be similar to something. It could be said also that anything can be similar to anything in some sense. The universe can’t include something which is not similar to the universe in the exact way which constitutes its inclusion. Inclusion by definition is commonality, and commonality is some kind of agreement.
Agreement is not a concept, it is the agent of all coherence, real and imagined – all forms and functions, all things and experiences are coherent precisely because they are ‘like’ other things and experiences, and that there is (to quote David Chalmers) ‘something that it is like’ to experience those phenomena. Without this ontological glue, this associative capacity which all participants in the universe share, there can be no patterns or events, no consistency or parts, only unrelated fragments. That would truly be a universe from nothing, but it would not be a universe.
The question then of where this capacity for agreement comes from is actually moot, since we know that nothing can come from anything which does not already possess this synthetic a priori capacity for inclusion – to cohere as that which seems similar in some sense to itself in spite of dissimilarity in other ways. Something that happens which is similar to something that happened at a different time is said to be happening again. A thing which is similar to another thing in a different location can be said to be ‘the same kind of thing’. This is what consciousness is all about and it is what physics, mathematics, art, philosophy, law, etc are all about. It is what nature is all about. The unity behind multiplicity and the multiplicity behind unity. Indra’s Net, Bohm’s Implicate Order, QM’s vacuum energy, etc, are all metaphors for this same quality…a quality which is embodied as metaphor itself in human psychology. Metaphor is meta-likeness. It links essential likeness across the existential divide. Metaphor bridges the explanatory gap, not by explanation, but by example. Like the = sign, the medium is the message.
Aside from their duty of ‘ferrying-over meaning’ from the public example to private experience and private example to public application, metaphors tell the story of metaphors themselves. Implicit within each metaphor is the bootstrap code, the instruction set for producing metaphors. Metaphor is the meta-meme and memes are meta-metaphors. This self-nesting is a theme (a meme theme, ugh) of sense, and a hint that sense itself is insuperable. Mathematically, you could say that the axiom of foundation is itself a non-well-founded set. The rule of rules does not obey any rules. Regularity is, by definition, the cardinal irregularity, as it can only emerge from its own absence if it emerges at all. If it does not emerge, then it is still the cardinal exception to its own regularity, since everything else in the universe does emerge from something. First cause then, by being uncaused itself, is the ultimate un-likelihood. First cause by definition is singular and cannot be like anything else, and there can be nothing that it is like to be it. At the same time, everything that is not the first cause is like the first cause, and there is something that it is like to be that difference from the first cause – some aesthetic dissimilarity which constitutes some sense of partial separation (diffraction).
To get at the probability which is assumed by the Western mindset’s mechanistic universe, we have to begin with the Absolutely improbable. This is akin to realizing that dark is the absence of light when it was formerly assumed that dark was only something which could be added to a light background. Improbability is the fundamental, the synthetic a priori from which commonality is derived. Statistical analysis is a second or third order abstraction, not a primary mechanism. The primary mechanism is likeness itself, not likelihood. Likelihood follows from likeness, which follows from Absolute uniqueness, from the single all-but-impossible Everythingness rather than a plurality of inevitable nothingness.
Why can’t the world have a universal language? Part II
This is more of a comment on Marc Ettlinger’s very good and thought-provoking answer (I have reblogged it here and here). In particular I am interested in why pre-verbal expressions do not diverge in the same way as verbal language. I’m not sure that something like a smile, for instance, is literally universal to every human society, but it seems nearly so, and it even extends to other animal species, or so it appears.
What’s interesting to me is that you have this small set of gestures which are even more intimate and personal than verbal signals – more inseparable from identity, which then gets expressed in this interpersonal linguistic way which is at once lower entropy and higher entropy. What I mean is that language has the potential both to carry a more highly articulated, complex meaning, but also to carry more ambiguity than a common gesture.
When a foreigner tries to communicate with a native without having common language, they resort to pre-verbal gestures. Rather than developing that into a universal language, we, as you say, opt for a more proprietary expression of ourselves, our culture, etc… except that in close contact, the gestures would actually be just as personally expressive if not more. There’s all kinds of nuance loaded into that communication, of individual personality as well as social and cultural (and species) identity.
So why do we opt for the polyglot approach for verbal symbols but not for raw emotive gestures? I think that the key is in the nature of the boundary between public and private experiences. I think there are two levels of information entropy at work. Something like a grunt or a yell is a very low entropy broadcast on an intra-personal level and a high entropy broadcast on an extra-personal level. If something makes a loud noise at you, whether it’s a person or a bear, the message is clear – “I am not happy with you, go away.” These primal emotions need not be simple either. Grief, pride, jealousy, betrayal, etc. might be quite elusive to define in non-emotional terms, full of complexity and counter-intuitive paradox. If we want to communicate something which is about something other than the private states of the interacting parties, however, the grunt or scowl is a very highly entropic vehicle. What’s he yelling about? Enter the linguistic medium.
The human voice is perhaps the most fantastically articulated instrument which Homo sapiens has developed, second only to the cortex itself. The hand was arguably more important in the early hominid era, perhaps, but without the voice, the development of civilization would have undoubtedly stalled. It’s like the paleolithic internet: mobile, personal yet social, customizable, creative. It’s a spectacular thing to have whether you’re hunting and gathering or settling in for a nice long hierarchical management of surplus agricultural production.
The human voice is the bridge between the private identity in a world based on very local and intimate concerns, and a public world of identity multiplicities. To repurpose the lo-fi private yawps and howls with more high-fidelity vocalizations requires trading off directness and immediacy for a more problematic but intelligent code. One of the key features is that once a word is spoken, it cannot be taken back as easily. A growl can be retracted with a smile, but a word has a ‘point’ to make. It is thermodynamically irreversible. Once it has been uttered in public, it cannot be taken back. A decision has been made. A thought has become a thing.
Inscribing language in written form takes this even one step further, and there is a virtuous cycle between thought, speech, actions, and writing which was like the Cambrian explosion for the human psyche. Unlike private gestures which only recur in time, public artifacts, spoken or written, are persistent across space. They become an archeological record of the mind – the library is born. Why can’t the world have a universal language? Because we can’t get rid of the ones that we’ve got already, or at least not until recently. Public artifacts persist spatially. Even immaterial artifacts like words and phrases are spread by human vectors as they settle, migrate, concentrate, and disperse.
Because language originates out of public discourse which is local to specific places, events, and people, the aesthetics of the language actually embody the qualities of those events. This is a strange topic, as yet virtually untouched by science, but it is a level of anthropology which has profound implications for the physics of privacy itself – of consciousness. Language is not only identity and communication; I would say that it is also a view of the entire human world. Within language, the history of human culture as a whole rides right alongside the feelings and thoughts of individuals, their lives, and their relation with nature as it seemed to them. The power of language to describe, to simulate, and to evoke fiction makes each new word or phrase a kind of celebration. The impact of technology seems to be accelerating both the extension of language and its homogenization. At the same time, as instant translation becomes more a part of our world, the homogenization may suddenly drop off as people are allowed to receive everything in their own language.
Ehh, How Do You Say…
The use of fillers in language is not limited to spoken communication.
In American Sign Language, UM can be signed with open-8 held at chin, palm in, eyebrows down (similar to FAVORITE); or bilateral symmetric bent-V, palm out, repeated axial rotation of wrist (similar to QUOTE).
This is interesting to me because it helps differentiate communication which is unfolding in time from communication which is spatially inscribed. When we speak informally, most people use some filler words, sounds, and gestures. Some support for embodied cognition theories has come from studies which show that
“Gestural Conceptual Mapping (congruent gestures) promotes performance. Children who used discrete gestures to solve arithmetic problems, and continuous gestures to solve number estimation, performed better. Thus, action supports thinking if the action is congruent with the thinking.”
The effective gestures that they refer to aren’t exactly fillers, because they mimic or indicate conceptual experiences in a full-body experience. The body is used as a literal metaphor. Other gestures, however, seem relatively meaningless, like filler. There seem to be levels of filler usage which range in frequency and intensity from the colorful to the neurotic, in which generic signs are used as ornament/crutch, or like a carrier tone to signify when the speaker is done speaking (know’am’sayin?).
In written language, these fillers are generally only included ironically or to simulate conversational informality. Formal writing needs no filler because there is no real-time relation between participating subjects. The relation with written language was traditionally a relation with an object. The book can’t control whether the reader continues to read or not, so there is no point in gesturing that way. With the advent of real-time text communication, we have experimented with emoticons and abbreviations to animate the frozen medium of typed characters. In this article, John McWhorter points out that ‘LOL isn’t funny anymore’ – that it has entered a sort of quasi-filler state where it can mean many different things or not much of anything.
In terms of information entropy, fillers are maximally entropic. Their meaning is uncertain, elastic, irrelevant, but also, and this is cryptic but maybe significant…they point to the meta-conversational level. They refer back to the circumstances of the conversation rather than the conversation itself. As with the speech carrier-tone fillers like um… or ehh…, or hand gestures, they refer obliquely to the speaker themselves, to their presence and intent. They are personal, like a signature. Have you ever noticed that when people you have known die, it is their laugh which is most immediately memorable? Or their quirky use of fillers. High information entropy ~ high personal input. Think of your signature compared to typing your name. Again, signatures occur in real time; they represent a moment of subjective will being expressed irrevocably. The collapse of information entropy which takes place in formal, traditional writing is a journey from the private perpetual here of subjectivity to the world of public objects. It is a passage* from the inner semantic physics, through initiative or will, striking a thermodynamically irreversible collision with the page. That event, I think, is the true physical nature of public time – instants where private affect is projected as public effect.
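As a loose illustration of the Shannon sense of that claim (a minimal sketch with invented numbers, not data from anything cited here): a filler whose possible ‘meanings’ are spread nearly evenly across readings has a higher entropy than a content word whose meaning distribution is sharply peaked, and a perfectly uniform spread would be the maximum.

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits, skipping zero terms."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical meaning distributions, purely illustrative.
# A filler like "um" is spread nearly evenly over many possible readings...
filler = [0.25, 0.25, 0.20, 0.15, 0.15]
# ...while a content word is concentrated on one dominant meaning.
content_word = [0.90, 0.05, 0.03, 0.02]

print(f"filler entropy:       {shannon_entropy(filler):.2f} bits")        # ~2.29 bits
print(f"content-word entropy: {shannon_entropy(content_word):.2f} bits")  # ~0.62 bits
```

The numbers are made up, but the shape of the comparison is the point: the flatter the distribution of what a token could mean, the more bits of uncertainty it carries.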
Speakers who are not very fluent in a language seem to employ a lot of fillers. For one thing, they buy time to think of the right word, and they signal an appeal for patience, not just on a mechanical level (more data to come, please stand by), but on a personal level as well (forgive me, I don’t know how to say…). Is it my imagination, or are Americans sort of an exception to the rule, preferring stereotypically to yell words slowly rather than use the ‘ehh’ filler? Maybe that’s not true, but the stereotype is instructive as it implies an association between being pushy and adopting the more impersonal, low-entropy communication style.
This has implications for AI as well. Computers can blink a cursor or rotate an hourglass icon at you, and that does convey some semblance of personhood to us, I think, but is it real? I say no. The computer doesn’t improve its performance by making these gestures to you. What we might subtly read as interacting with the computer personally in those hourglass moments is a figment of the pathetic fallacy rather than evidence of machine sentience. It has high information entropy in the sense that we don’t know what the computer is doing exactly, or whether it’s going to lock up, but it has no experiential entropy. It is superficially animated and offers no acknowledgement of the user. Like the book, it is thermodynamically irreversible as far as the user is concerned. We can only wait and hope that it stops hourglassing.
The meanings of filler words in different languages are interesting too. They say things like “you see/you know”, “it means”, “like”, “well”, and “so”. They talk about things being true or actual. “Right?” “OK?”. Acknowledgment of inter-subjective synch with the objective perception. Agreement. Positive feedback. “Do you copy?” relates to “like”…similarity or repetition. Symmetric continuity. Hmm.
*orthomodular transduction to be pretentiously precise
Quora on Memory and Words
Are words like memories and memories like words?
It seems like an odd question. Sort of like asking ‘Are screenplays like entertainment and entertainment like screenplays?’. In a broad sense, everything is like memories. The whole content of the universe could be thought of as the persistence of coherent phenomena through time…discoverable patterns struck within patterns. To narrow it down to human memory gets into different overlapping neurological categories; short term, long term, declarative, implicit, autobiographical, sensory caching, etc. Those are more about our particular phenomenology of pattern recognition and recollection, which seems to be tightly associated with words, but also images, sounds, smells, etc.
I can think of words as like dehydrated experiences, or crystallized pointers to evoke a narrative flow. Memory implies traces of factual experiences in the past, whereas words more often weave a fictional perspective on the past, present, and future. Words are semiotic devices which focus and reflect semantic content through syntactic forms – two different senses of informing which both rely on memory. Perhaps this is where their power to recapitulate sense-making comes from. By presenting a linguistic-symbolic expression which is relatively impersonal coupled with => a proprietary personal motive, a reflection of sensory wholeness is achieved, using the products of sense themselves (optical icons, vocalized sounds) as a body. ‘See what => I’m saying?’ ‘Know what => I mean?’
A word, then, can be used to encapsulate fragments from any of my personal memories, stories, ideas, texts, knowledge, thoughts, etc., and through that encapsulation provide the keys for them to be reconstructed in someone else. The meaning figuratively rides on our shared associations, language, and common sense, so that it is not literally transmitted through space as ‘information’ but rather elides space entirely by a process of local sensory reification. Words can be seen ‘there’ but can only be understood ‘here’.
Memory is also a local understanding, but it does not require the external-facing symbolic packaging. It doesn’t need to be reified in someone else’s head, only recalled subjectively. Of course we can remember words too, and all words are by definition memories (we have to remember what words the language we intend to speak contains), but memories extend beyond words. The effectiveness of words can obscure our understanding of memory. We are so used to seeing the world through this logical symbolic process that we tend to see all of consciousness in this light. Memory does not have to be experienced consciously, but words do. In fact, memory may not have to be experienced at all to be influential. Reading these words, for instance, is predicated on some implicit memory of how to read English, but to assume that this literally implies a database of kindergarten memories being accessed in real-time read/writes does not tell the whole story of our experience. That may be true in one sense, but my hunch is that it also works differently. I think that the memory itself may become an iconic part of who we are, more like the way looking through colored glass gives us a different way of seeing the world.