Archive
Prime Spiral
The prime spiral, also known as Ulam’s spiral, is a plot in which the positive integers are arranged in a spiral with primes indicated in some way along the spiral. Unexpected patterns of diagonal lines are apparent in such a plot. This construction was first made by Polish-American mathematician Stanislaw Ulam (1909-1986) in 1963 while doodling during a boring talk at a scientific meeting. While drawing a grid of lines, he decided to number the intersections according to a spiral pattern, and then began circling the numbers in the spiral that were primes. Surprisingly, the circled primes appeared to fall along a number of diagonal straight lines or, in Ulam’s slightly more formal prose, it “appears to exhibit a strongly nonrandom appearance”.
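Ulam’s construction is easy to reproduce programmatically. A minimal sketch (the grid size, turn direction, and ASCII rendering below are arbitrary choices of mine, not anything from Ulam) numbers the cells of a square grid along a spiral from the center and marks the primes:

```python
# Minimal sketch of Ulam's construction: number the cells of a square
# grid along a spiral from the center, marking which numbers are prime.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def ulam_spiral(size):
    """Return a size x size grid of booleans, True where the spiral number is prime."""
    grid = [[False] * size for _ in range(size)]
    x = y = size // 2              # start at the center
    dx, dy = 1, 0                  # first step moves right
    n, arm, run, turns = 1, 1, 0, 0
    while n <= size * size:
        if 0 <= x < size and 0 <= y < size:
            grid[y][x] = is_prime(n)
        n += 1
        x, y = x + dx, y + dy
        run += 1
        if run == arm:             # end of the current spiral arm: turn
            run = 0
            dx, dy = -dy, dx
            turns += 1
            if turns % 2 == 0:     # arm length grows every second turn
                arm += 1
    return grid

# Render: '#' marks a prime cell.
for row in ulam_spiral(11):
    print("".join("#" if p else "." for p in row))
```

The only subtlety is the arm-length schedule: a square spiral's runs go 1, 1, 2, 2, 3, 3, …, which is why the length only increments on every second turn.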
In the above variation of the Ulam spiral, red squares represent prime numbers and white squares represent non-primes.
I can’t decide if I care about prime numbers. On the one hand, the idea of indivisibility is interesting as it relates to consciousness. In some sense, I think that the universe, and each experience of it, is a one-hit-wonder. All appearances of repetition are local to some frame of reference. If someone is color blind, they may see alternating red and green dots as a repeating grey dot. If you listen to someone speaking a language that you can’t understand, it can seem, on some level, that they are saying the same kinds of sounds over and over again.
I wondered if any random constraint would appear to contain pattern when mapped as a spiral like that. This one above colors a hex yellow if the number, spelled out, contains the letter i. Writing these out I noticed how the language we use to name the numbers is isomorphic above ten. A trivial observation, I know, but I think that this logical version of onomatopoeia reveals some insights about recursive enumeration, and its foundation in an expectation of the absolutely generic.
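The letter rule can be mechanized the same way. Below is a rough number-speller of my own, valid only for 0–999 and omitting spaces and hyphens (which don’t affect a letter-membership test); the resulting predicate can drive a spiral plot in place of a primality test:

```python
# Rough English number names for 0..999, concatenated without spaces
# or hyphens, since those don't affect a letter-membership test.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety"]

def spell(n):
    """English name of n for 0 <= n < 1000."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + (ONES[n % 10] if n % 10 else "")
    return ONES[n // 100] + "hundred" + (spell(n % 100) if n % 100 else "")

def has_i(n):
    """True when the spelled-out name of n contains the letter i."""
    return "i" in spell(n)

print([n for n in range(1, 21) if has_i(n)])
```

The recursion in `spell` is the “isomorphism above ten” in miniature: names past twenty are assembled from the same parts over and over.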
To someone who is fascinated by prime numbers (often in a hypnotic, compulsive kind of way, as these spirals suggest), part of the appeal may be that the patterns that they seem to make defy this expectation of generic interchangeability as the basis of counting. Three isn’t really supposed to be different from two or one; it’s just “the next one after two”. Finding these cosmic Easter eggs by poring over mathematics is, as the movie Pi dramatized, a weird kind of quasi-religious calling: seeking to sleuth out a hidden intelligence where neither intelligence nor secrecy would seem possible. Numbers are supposed to be universal; publicly accessible. There shouldn’t be any proprietary codes lurking in there.
Maybe there aren’t? Without mapping primes into these spirals to look at with our eyes, the interesting sequences and ratios in mathematics would not be so interesting. Mathematics may provide the most neutral and bland medium possible for the projection of patterns. Like a supernatural oracle or Rorschach inkblot, an ideal medium for pareidolia and conjuring of simulacra from the subconscious.
Math is haunted alright, but by pattern recognition – sense making, rather than Platonic essences. Because math is an ideal conductor and insulator for sense, it does end up reflecting sense in a clear and concise way, however I think it is mainly a reflection. Math is not the heart of the universe, not the whole, but the hole in the whole – a divider which shaves off differences with the power of indifference.
Wittgenstein, Physics, and Free Will
JE: My experience from talking to philosophers is that Wittgenstein’s view is certainly contentious. There seem to be two camps. There are those seduced by his writing who accept his account, and there are others who, like me, feel that Wittgenstein expressed certain fairly trivial insights about perception and language that most people should have worked out for themselves, and then proceeded to draw inappropriate conclusions and screw up the progress of contemporary philosophy for fifty years. The latter would be the standard view amongst philosophers working on biological problems in language as far as I can see.
Wittgenstein is right to say that words have different meanings in different situations – that should be obvious. He is right to say that contemporary philosophers waste their time using words inappropriately – anyone from outside sees that straight away. But his solution – to say that the meaning of words is just how they are normally used – is no solution; it turns out to be a smoke screen that allows him to indulge his own prejudices and not engage in productive explanation of how language actually works inside brains.
The problem is that there is a weaseling going on which, as I indicated before, leads to Wittgenstein encouraging the very crime he thought he was clever to identify. The meaning of a word may ‘lie in how it is used’ in the sense that the occurrences of words in talk are functionally connected to the roles words play in internal brain processes and relate to other brain processes, but this is trivial. To say that meaning is use is, as I said, clearly a route to the W crime itself. If I ask how you know that meaning means use, you will reply that a famous philosopher said so. Maybe he did, but he also said that words do not have unique meanings defined by philosophers – they are used in all sorts of ways, and there are all sorts of meanings of meaning that are not ‘use’, as anyone who has read Grice or Chomsky will have come to realise. Two meanings of a word may be incompatible, yet it may be well nigh impossible to detect this from use – the situation I think we have here. The incompatibility only becomes clear if we rigorously explore what these meanings are. Wittgenstein is about as much help as a label on a packet of pills that says ‘to be taken as directed’.
But let’s be Wittgensteinian and play a language game of ordinary use, based on the family resemblance thesis. What does choose mean? One meaning might be to raise in the hearer the thought of having a sense of choosing. So a referent of ‘choose’ is an idea or experience that seems to be real and I think must be. But we were discussing what we think that sense of choosing relates to in terms of physics. We want to use ‘choose’ to indicate some sort of causal relation or an aspect of causation, or if we are a bit worried about physics still having causes we could frame it in terms of dynamics or maybe even just connections in a spacetime manifold. If Wheeler thinks choice is relevant to physics he must think that ‘choose’ can be used to describe something of this sort, as well as the sense of choosing.
So, as I indicated, we need to pin down what that dynamic role might be. And I identified the fact that the common presumption about this is wrong. It is commonly thought that choosing is being in a situation with several possible outcomes. However, we have no reason to think that. The brain may well not be purely deterministic in operation. Quantum indeterminacy may amplify up to the level of significant indeterminacy in such a complex system with such powerful amplification systems at work. However, this is far from established, and anyway it would have nothing to do with our idea of choosing if it were just a level of random noise. So I think we should probably work on the basis that the brain is in fact as tightly deterministic as matters here. This implies that in the situation where we feel we are choosing THERE IS ONLY ONE POSSIBLE OUTCOME.
The problem, as I indicated, is that there seem to be multiple possible outcomes to us because we do not know how our brain is going to respond. Because this lack of knowledge is a standard feature of our experience, our idea of ‘a situation’ is better thought of as ‘an example of an ensemble of situations that are indistinguishable in terms of outcome’. If I say that when I get to the main road I can turn right or left, I am really saying that I predict an instance of an ensemble of situations which are indistinguishable in terms of whether I go right or left. This ensemble issue is of course central to QM, and maybe we should not be so surprised about that – operationally we live in a world of ensembles, not of specific situations.
So this has nothing to do with ‘metaphysical connotations’ which is Wittgenstein’s way of blocking out any arguments that upset him – where did we bring metaphysics in here? We have two meanings of choose. 1. Being in a situation that may be reported as being one of feeling one has choice (to be purely behaviourist) and 2. A dynamic account of that situation that turns out not to agree with what 99.9% of the population assume it is when they feel they are choosing. People use choose in a discussion of dynamics as if it meant what it feels like in 1 but the reality is that this use is useless. It is a bit like making burnt offerings to the Gods. That may be a use for goats but not a very productive one. It turns out that the ‘family resemblance’ is a fake. Cousin Susan who has pitched up to claim her inheritance is an impostor. That is why I say that although to ‘feel I am choosing’ is unproblematic the word ‘choice’ has no useful meaning in physics. It is based on the same sort of error as thinking a wavefunction describes a ‘particle’ rather than an ensemble of particles. The problem with Wittgenstein is that he never thought through where his idea of use takes you if you take a careful scientific approach. Basically I think he was lazy. The common reason why philosophers get tied in knots with words is this one – that a word has several meanings that do not in fact have the ‘family relations’ we assume they have – this is true for knowledge, perceiving, self, mind, consciousness – all the big words in this field. Wittgenstein’s solution of going back to using words the way they are ‘usually’ used is nothing more than an ostrich sticking its head in the sand.
So would you not agree that in Wheeler’s experiments the experimenter does not have a choice in the sense that she probably feels she has? She is not able to perform two alternative manoeuvres on the measuring set up. She will perform a manoeuvre, and she may not yet know which, but there are no alternatives possible in this particular instance of the situation ensemble. She is no different from a computer programmed to set the experiment up a particular way before the particle went through the slits, contingent on a meteorite not shaking the apparatus after it went through the slits (causality is just as much an issue of what did not happen as what did). So if we think this sort of choosing tells us something important about physics, we have misunderstood physics, I believe.
Nice response. I agree almost down the line.
As far as the meaning of words go, I think that no word can have only one meaning because meaning, like all sense, is not assembled from fragments in isolation, but rather isolated temporarily from the totality of experience. Every word is a metaphor, and metaphor can be dialed in and out of context as dictated by the preference of the interpreter. Even when we are looking at something which has been written, we can argue over whether a chapter means this or that, whether or not the author intended to mean it. We accept that some meanings arise unintentionally within metaphor, and when creating art or writing a book, it is not uncommon to glimpse and develop meanings which were not planned.
To choose has a lower limit, between the personal and the sub-personal, which deals with the difference between accidents and ‘on purpose’, where accidents are assumed to demand correction; and there is an upper limit on choice, between the personal and the super-personal, in which we can calibrate our tolerance toward accidents, possibly choosing to let them be defined as artistic or intuitive, and even pursuing and developing them.
I think that this lensing of choice into upper and lower limits is, like red and blue shift, a property of physics – of private physics. All experiences, feelings, words, etc can explode into associations if examined closely. All matter can appear as fluctuations of energy, and all energy can appear as changes in the behavior of matter. Reversing the figure-ground relation is a subjective preference. So too is reversing the figure-ground relation of choice and determinism a subjective preference. If we say that our choices are determined, then we must explain why there is such a thing as having a feeling that we choose. Why would there be a difference, for example, in the way that we breathe and the way that we intentionally control our breathing? Why would different areas of the brain be involved in voluntary control, and why would voluntary muscle tissue be different from smooth muscle tissue if there were no role for choice in physics? We have misunderstood physics in that we have misinterpreted the role of our involvement in that understanding.
We see physics as a collection of rules from which experiences follow, but I think that it can only be the other way around. Rules follow from experiences. Physics lags behind awareness. In the case of humans, our personal awareness lags behind our sub-personal awareness (as shown by Libet, etc) but that does not mean that our sub-personal awareness follows microphysical measurables. If you are going to look at the personal level of physics, you only have to recognize that you can intend to stand up before you stand up, or that you can create an opinion intentionally which is a compromise between select personal preferences and the expectations of a social group.
Previous Wittgenstein post here.
If You See Wittgenstein on the Road… (you know what to do)
Me butting into a language based argument about free will:
> I don’t see anything particularly contentious about Wittgenstein’s claim that the meaning of a word lies in how it is used.
Can something (a sound or a spelling) be used as a word if it has no meaning in the first place though?
>After all, language is just an activity in which humans engage in order to influence (and to be influenced by) the behaviour of other humans.
Not necessarily. I imagine that the origin of language has more to do with imitation of natural sounds and gestures. Onomatopoeia, for example. Clang, crunch, crash… these are not arbitrary signs which derive their meaning from usage alone. C. S. Peirce was on the right track with discerning between symbols (arbitrary signs whose meaning is attached by use alone), icons (signs which are isomorphic to their referent), and indices (signs which refer by inevitable association, as smoke is an index of fire). Words would have developed out of what they feel like to say and to hear, and the relation of that feeling to what is meant.
>I’m inclined to regard his analysis of language in the same light as I regard Hume’s analysis of the philosophical notion of ‘substance’ (and you will be aware that I side with process over substance) – i.e. there is no essential essence to a word. Any particular word plays a role in a variety of different language games, and those various roles are not related by some kind of underlying essence but by what Wittgenstein referred to as a family resemblance. The only pertinent question becomes that of what role a word can be seen to play in a particular language game (i.e. what behavioural influences it has), and this is an empirical question – i.e. it does not necessarily have any metaphysical connotations.
While Wittgenstein’s view is justifiably influential, I think that it belongs to the perspective of modernism’s transition to postmodernity. As such, it is bound by the tenets of existentialism, in which isolation, rather than totality, is assumed. I question the validity of isolation when it comes to subjectivity (what I call private physics) since I think that subjectivity makes more sense as a temporary partition, or diffraction within the totality of experience, rather than a product of isolated mechanisms. A prism does not produce the visible spectrum by reinventing it mechanically; colors are instead revealed through the diffraction of white light. Much of what goes on in communication is indeed language games, and I agree that words do not have an isolated essence, but that does not mean that the meaning of words is not rooted in a multiplicity of sensible contexts. The pieces that are used to play the language game are not tokens; they are more like colored lights that change colors when they are put next to each other. Lights which can be used to infer meaning on many levels simultaneously, because all meaning is multivalent/holographic.
> So if I wish to know the meaning of a word, e.g. ‘choice’, I have to LOOK at how the word is USED rather than THINK about what kind of metaphysical scheme might lie behind the word (Philosophical Investigations section 66 and again in section 340).
That’s a good method for learning about some aspects of words, but not others. In some cases, as in onomatopoeia, it is the worst way of learning anything about a word, and you will wind up thinking that Pow! is some kind of commentary about humorous violence and has nothing to do with the *sound* of bodies colliding and its emotional impact. It’s like the anthropologist who gets the completely wrong idea about what people are doing because they are reverse engineering what they observe back to other ethnographers’ interpretations rather than to the people’s experienced history together.
> So, for instance, when Jane asks me “How should I choose my next car?” I understand her perfectly well to be asking about the criteria she should be employing in making her decision. Similarly with the word ‘free’ – I understand perfectly well what it means for a convict to be set free. And so to the term ‘free will’; As Hume pointed out, there is a perfectly sensible way to use the term – i.e. when I say “I did it of my own free will”, all I mean is that I was not coerced into doing it, and I’m conferring no metaphysical significance upon my actions (the compatibilist notion of free will in contrast to the metaphysical notion of free will).
Why would that phrase ‘free will’ be used at all though? Why not just say “I was not coerced” or nothing at all, since without metaphysical (or private physical) free will, there would be no important difference between being coerced by causes within your body or beyond your body. Under determinism, there is no such thing as not being coerced.
> The word ‘will’ is again used in a variety of language games, and the family resemblance would appear to imply something about the future (e.g. “I will get that paper finished today”). When used in the free will language game, it shares a significant overlap with the choice language game. But when we lift a word out of its common speech uses and confer metaphysical connotations upon it, Wittgenstein tells us that language has ceased doing useful work (as he puts it in the PI section 38, “philosophical problems arise when language goes on holiday”).
We should not presume that work is useful without first assuming free will. Useful, like will, is a quality of attention, an aesthetic experience of participation which may be far more important than all of the work in the universe put together. It is not will that must find a useful function, it is function that acquires use only through the feeling of will.
> And, of course, the word ‘meaning’ is itself employed in a variety of different language games – I can say that I had a “meaningful experience” without coming into conflict with Wittgenstein’s claim that the meaning of a word lies in its use.
Use is only one part of meaning. Wittgenstein was looking at a toy model of language that ties only to verbal intellect itself, not to the sensory-motor foundations of pre-communicated experience. It was a brilliant abstraction, important for understanding a lot about language, but ultimately I think that it takes the wrong things too seriously. All that is important about awareness and language would, under the Private Language argument, be passed over in silence.
> Regarding Wheeler’s delayed choice experiment, the experimenter clearly has a choice as to whether she will deploy a detector that ignores the paths by which the light reaches it, or a detector that takes the paths into account. In Wheeler’s scenario that choice is delayed until the light has already passed through (one or both of) the slits. I really can’t take issue with the word ‘choice’ as it is being used here.
I think that QM also will eventually be explained by dropping the assumption of isolation. Light is visual sense. It is how matter sees and looks. Different levels of description present themselves differently from different perspectives, so that if you put matter in the tiniest box you can get, you give it no choice but to reflect back the nature of the limitation of that specific measurement, and measurement in general.
Wittgenstein in Wonderland, Einstein under Glass
If I understand the idea correctly – that is, if there is enough of the idea which is not private to Ludwig Wittgenstein that it can be understood by anyone in general or myself in particular, then I think that he may have mistaken the concrete nature of experienced privacy for an abstract concept of isolation. From Philosophical Investigations:
The words of this language are to refer to what can be known only to the speaker; to his immediate, private, sensations. So another cannot understand the language. – http://plato.stanford.edu/entries/private-language/
To begin with, craniopagus (brain conjoined) twins do actually share sensations that we would consider private.
The results of the test did not surprise the family, who had long suspected that even when one girl’s vision was angled away from the television, she was laughing at the images flashing in front of her sister’s eyes. The sensory exchange, they believe, extends to the girls’ taste buds: Krista likes ketchup, and Tatiana does not, something the family discovered when Tatiana tried to scrape the condiment off her own tongue, even when she was not eating it.
There is no reason that it should not eventually be technologically feasible to export the connectivity which craniopagus twins experience through some kind of neural implant or neuroelectric multiplier. There are already computers that can be controlled directly through the brain.
Brain-computer interfaces that monitor brainwaves through EEG have already made their way to the market. NeuroSky’s headset uses EEG readings as well as electromyography to pick up signals about a person’s level of concentration to control toys and games (see “Next-Generation Toys Read Brain Waves, May Help Kids Focus”). Emotiv Systems sells a headset that reads EEG and facial expression to enhance the experience of gaming (see “Mind-Reading Game Controller”).
All that would be required in principle would be to reverse the technology to make them run in the receiving direction (computer>brain) and then imitate the kinds of neural connections which brain conjoined twins have that allow them to share sensations. The neural connections themselves would not be aware of anything on a human level, so it would not need to be public in the sense that sensations would be available without the benefit of a living human brain, only that the awareness could, to some extent, incite a version of itself in an experientially merged environment.
Because the success and precision of science have extended our knowledge so far beyond our native instruments, sometimes contradicting them successfully, we tend to believe that the view that diagnostic technology provides is superior to, or serves as a replacement for, our own awareness. While it is true that our own experience cannot reveal the same kinds of things that an fMRI or EEG can, I see that as a small detail compared to the wealth of value that our own awareness provides about the brain, the body, and the worlds we live in. Natural awareness is the ultimate diagnostic technology. Even though we can certainly benefit from a view outside of our own, there’s really no good reason to assume that what we feel, think, and experience isn’t a deeper level of insight into the nature of biochemical physics than we could possibly gain otherwise. We are evidence that physics does something besides collide particles in a void. Our experience is richer, smarter, and more empirically factual than what an instrument outside of our body can generate on its own. The problem is that our experience is so rich and so convoluted with private, proprietary knots that we can’t share very much of it. We, and the universe, are made of private language. It is the public reduction of privacy which is temporary and localized…it’s just localized as a lowest common denominator.
While it is true that at this stage in our technical development subjective experience can only be reported in a way which is limited by local social skills, there is no need to invoke a permanent ban on the future of communication and trans-private experience. Instead of trying to report on a subjective experience, it could be possible to share that experience through a neurological interface – or at least to exchange some empathic connection that would go farther than public communication.
If I had some psychedelic experience which allowed me to see a new primary color, I couldn’t communicate that publicly. But if I could just put on a device that allows our brains to connect, then someone else might be able to share the memory of what that looked like.
It seems to me that Wittgenstein’s private language argument (sacrosanct as it seems to be among the philosophically inclined) assumes privacy as identical to isolation, rather than the primordial identity of pansensitivity which I think it could be. If privacy is accomplished as I suggest, by the spatiotemporal ‘masking’ of eternity, then any experience that can be had is not a nonsense language to be ‘passed over in silence’, but rather a personally articulated fragment of the Totality. Language is only communication – intellectual measurement for sharing public-facing expressions. What we share privately is transmeasurable and inherently permeable to the Totality beneath the threshold of intellect.
Said another way, everything that we can experience is already shared by billions of neurons. Adding someone else’s neurons to that group should indeed be only a matter of building a synchronization technology. If, for instance, brain conjoined twins have some experience that nobody else has (like being the first brain conjoined twins to survive to age 40 or something), then they already share that experience, so it would no longer be a ‘private language’. The true future of AI may not be in simulating awareness as information, but in using information to share awareness. Certainly the success of social networking and MMPGs has shown us that what we really want out of computers is not for them to be us, but for us to be with each other in worlds we create.
I propose that rather than beginning from the position of awareness being a simulation to represent a reality that is senseless and unconscious, we should try assuming that awareness itself is the undoubtable absolute. I would guess that each kind of awareness already understands itself far better than we understand math or physics; it is only the vastness of human experience which prevents that understanding from being shared on all levels of itself, all of the time.
The way to understand consciousness would not be to reduce it to a public language of physics and math, since our understanding of our public experience is itself robotic and approximated by multiple filters of measurement. To get at the nature of qualia and quanta requires stripping down the whole of nature to Absolute fundamentals – beyond language and beyond measurement. We must question sense itself, and we must rehabilitate our worldview so that we ourselves can live inside of it. We should seek the transmeasurable nature of ourselves, not just the cells of our brain or the behavioral games that we have evolved as one particular species in the world. The toy model of consciousness provided by logical positivism and structural realism is, in my opinion, a good start, but in the wrong direction – a necessary detour which is uniquely (privately?) appropriate to a particular phase of modernism. To progress beyond that I think requires making the greatest cosmological 180 since Galileo. Einstein had it right, but he did not generalize relativity far enough. His view was so advanced in the spatialization of time and light that he reduced awareness to a one dimensional vector. What I think he missed, is that if we begin with sensitivity, then light becomes a capacity with which to modulate insensitivity – which is exactly what we see when we share light across more than one slit – a modulation of masked sensitivity shared by matter independently of spacetime.
Why Computers Can’t Lie and Don’t Know Your Name
What do the Hangman Paradox, Epimenides Paradox, and the Chinese Room Argument have in common?
The underlying Symbol Grounding Problem common to all three is that from a purely quantitative perspective, a logical truth can only satisfy some explicitly defined condition. The expectation of truth itself being implicitly true, (i.e. that it is possible to doubt what is given) is not a condition of truth, it is a boundary condition beyond truth*. All computer malfunctions, we presume, are due to problems with the physical substrate, or the programmer’s code, and not incompetence or malice. The computer, its program, or binary logic in general cannot be blamed for trying to mislead anyone. Computation, therefore, has no truth quality, no expectation of validity or discernment between technical accuracy and the accuracy of its technique. The whole of logic is contained within the assumption that logic is valid automatically. It is an inverted mirror image of naive realism. Where a person can be childish in their truth evaluation, overextending their private world into the public domain, a computer is robotic in its truth evaluation, undersignifying privacy until it is altogether absent.
Because computers can only report a local fact (the position of a switch or token), they cannot lie intentionally. Lying involves extending a local fiction to be taken as a remote fact. When we lie, we know what a computer cannot guess – that information may not be ‘real’.
When we say that a computer makes an error, it is only because of a malfunction on the physical or programmatic level, therefore it is not false, but a true representation of the problem in the system which we receive as an error. It is only incorrect in some sense that is not local to the machine, but rather local to the user, who makes the mistake of believing that the output of the program is supposed to be grounded in their expectations for its function. It is the user who is mistaken.
It is for this same reason that computers cannot intend to tell the truth either. Telling the truth depends on an understanding of the possibility of fiction and the power to intentionally choose the extent to which the truth is revealed. The symbolic communication expressed is grounded strongly in the privacy of the subject as well as the public context, and only weakly grounded in the logic represented by the symbolic abstraction. With a computer, the hierarchy is inverted. A Turing Machine is independent of private intention and public physics, so it is grounded absolutely in its own simulacra. In Searle’s (much despised) Chinese Room Argument – the conceit of the decomposed translator exposes how the output of a program is only known to the program in its own narrow sensibility. The result of the mechanism is simply a true report of a local process of the machine which has no implicit connection to any presented truths beyond the machine…except for one: Arithmetic truth.
Arithmetic truth is not local to the machine, but it is local to all machines and all experiences of correct logical thought. This is an interesting symmetry, as the logic of mechanism is both absolutely local and instantaneous and absolutely universal and eternal, but nothing in between. Every computed result is unique to the particular instantiation of the machine or program, and universal as a Turing emulable template. What digital analogs are not is true or real in any sense which relates expressly to real, experienced events in spacetime. This is the insight expressed in Korzybski’s famous maxim ‘The map is not the territory’, and in the Use-Mention distinction, where using a word intentionally is understood to be distinct from merely mentioning the word as an object to be discussed. For a computer, there is no map-territory distinction. It’s all one invisible, intangible mapitory of disconnected digital events.
By contrast, a person has many ways to voluntarily discern territories and maps. They can be grouped together, such as when the acoustic territory of sound is mapped to the emotional-lyric territory of music, or the optical territory of light is mapped as the visual territory of color and image. They can be flipped so that the physics is mapped to the phenomenal as well, which is how we control the voluntary muscles of our body. For us, authenticity is important. We would rather win the lottery than just have a dream that we won the lottery. A computer does not know the difference. The dream and the reality are identical information.
Realism, then, is characterized by its opposition to the quantitative. Instead of being pegged to the polar austerity which is autonomous local + explicitly universal, consciousness ripens into the tropical fecundity of middle range. Physically real experience is in direct contrast to digital abstraction. It is semi-unique, semi-private, semi-spatiotemporal, semi-local, semi-specific, semi-universal. Arithmetic truth lacks any non-functional qualities, so that using arithmetic to falsify functionalism is inherently tautological. It is like asking an armless man to raise his hand if he thinks he has no arms.
Here’s some background stuff that relates:
The Hangman Paradox has been described as follows:
A judge tells a condemned prisoner that he will be hanged at noon on one weekday in the following week but that the execution will be a surprise to the prisoner. He will not know the day of the hanging until the executioner knocks on his cell door at noon that day.

Having reflected on his sentence, the prisoner draws the conclusion that he will escape from the hanging. His reasoning is in several parts. He begins by concluding that the “surprise hanging” can’t be on Friday, as if he hasn’t been hanged by Thursday, there is only one day left – and so it won’t be a surprise if he’s hanged on Friday. Since the judge’s sentence stipulated that the hanging would be a surprise to him, he concludes it cannot occur on Friday.

He then reasons that the surprise hanging cannot be on Thursday either, because Friday has already been eliminated and if he hasn’t been hanged by Wednesday night, the hanging must occur on Thursday, making a Thursday hanging not a surprise either. By similar reasoning he concludes that the hanging can also not occur on Wednesday, Tuesday or Monday. Joyfully he retires to his cell confident that the hanging will not occur at all.

The next week, the executioner knocks on the prisoner’s door at noon on Wednesday — which, despite all the above, was an utter surprise to him. Everything the judge said came true.
1) The conclusion “I won’t be surprised to be hanged Friday if I am not hanged by Thursday” creates another proposition to be surprised about. By leaving the condition of ‘surprise’ open ended, it could include being surprised that the judge lied, or any number of other soft contingencies that could render an ‘unexpected’ outcome. The condition of expectation isn’t an objective phenomenon, it is a subjective inference. Objectively, there is no surprise since objects don’t anticipate anything.
2) If we want to close in tightly on the quantitative logic of whether deducibility can be deduced: given five coin flips and a certainty that one will be heads, each successive tails flip increases the odds that one of the remaining flips will be heads. The fifth coin will either be 100% likely to be heads, or will prove that the assumed certainty was 100% wrong.
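The coin-flip reasoning above can be checked numerically. Here is a minimal Monte Carlo sketch, assuming fair flips and conditioning on the guarantee that at least one flip is heads; the function and parameter names are my own:

```python
import random

def conditional_head_probs(n_flips=5, trials=200_000, seed=42):
    """Estimate P(this flip is heads | all earlier flips were tails),
    conditioning on the guarantee that at least one flip is heads."""
    random.seed(seed)
    hits = [0] * n_flips  # this flip came up heads
    seen = [0] * n_flips  # all flips before this position were tails
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(n_flips)]
        if not any(flips):          # discard runs violating the guarantee
            continue
        for j in range(n_flips):
            if any(flips[:j]):      # an earlier head ends the tails streak
                break
            seen[j] += 1
            hits[j] += flips[j]
    return [h / s for h, s in zip(hits, seen)]

probs = conditional_head_probs()
```

The estimates climb from roughly 16/31 ≈ 0.52 for the first flip toward exactly 1.0 for the fifth, matching the observation that by the last flip the assumed certainty is either fulfilled or falsified.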
I think the paradox hinges on 1) the false inference of objectivity in the use of the word surprise and 2) the false assertion of omniscience by the judge. It’s like an Escher drawing. In real life, surprise cannot be predicted with certainty, and the quality of unexpectedness is not an objective thing, just as expectation is not an objective thing.
Connecting the dots, expectation, intention, realism, and truth are all rooted in the firmament of sensory-motive participation. To care about what happens cannot be divorced from our causally efficacious role in changing it. It’s not just a matter of being petulant or selfish. The ontological possibility of ‘caring’ requires letters that are not in the alphabet of determinism and computation. It is computation which acts as punctuation, spelling, and grammar, but not language itself. To a computer, every word or name is as generic as a number. They can store the string of characters that belong to what we call a name, but they have no way to really recognize who that name belongs to.
*I maintain that what is beyond truth is sense: direct phenomenological participation
Eigenmorphism: The Politics of Pansensitive Entanglement
Eigenmorphism is a neologism which refers to a hypothesis about fundamental laws of how natural phenomena persist in relation to each other. The thesis draws on some principles of General and Special Relativity, Quantum Mechanics, and semiotics to integrate phenomenal awareness and physics at a fundamental and ontological level. In philosophy of mind, the idea that awareness is something like a fundamental force coexisting with other forces of physics in some way is known as panpsychism; however, within Multisense Realism, the conjecture that is used is an even more radical one. For reasons explained later, MSR uses the term pansensitivity rather than panpsychism, and it conceives of the forces of physics as second-order divergences from the fundamental and irreducible capacity which is assumed to act as the common parent of all action and all being, all feeling and all knowing. This is not to be confused with a creationist account or theism, as it does not assume a single human-like being who feels all and does all; rather, primordial identity pansensitivity (PIP) is a weaker assertion that claims only that it makes more sense to view the cosmos, or at least the sense that the cosmos makes to itself, as originating from an agenda which is aesthetic and participatory rather than one which is automatic and functionalist.
Eigenmorphism is used here to explain how phenomena in general are presented and translated to each other at an ontological level; thus the “identity” of primordial identity pansensitivity is that the realism of nested sensory presentation is identical to existence. Eigenmorphism attempts to point out a single pattern of diffraction and calibration through which presentations and representations are privatized and generalized, both locally and universally. I have used David Chalmers’ paper, The Combination Problem for Panpsychism, as a jumping off point for applying PIP to the problems of binding and combining of subjective experience. The linked article provides an excellent discussion of the issues surrounding panpsychism, and how it is that physical and phenomenal states might coexist at a fundamental level. The central focus of his paper is to clarify the various schools of thought on how microphysical and/or microphenomenal states might combine and relate to so-called macrophysical and macrophenomenal states. He writes:
“The combination problem for panpsychism is: how can microphenomenal properties combine to yield macrophenomenal properties? […] The combination problem can be broken down into at least three subproblems, […] These three aspects yield what we might call the subject combination problem, the quality combination problem, and the structure combination problem.
[…]The subject combination problem is roughly: how do microsubjects combine to yield macro-subjects? Here microsubjects are microphysical subjects of experience, and macrosubjects are macroscopic subjects of experience such as ourselves. […] An especially pressing aspect of the subject combination problem is the subject-summing problem [in principle it seems that a macrosubject would not necessarily emerge from microsubjects].
[…]The quality combination problem is roughly: how do microqualities combine to yield macroqualities? Here macroqualities are specific phenomenal qualities such as phenomenal redness (what it is like to see red), phenomenal greenness, and so on. It is natural to suppose that microexperience involves microqualities, which might be primitive analogs of macroqualities. How do these combine? An especially pressing aspect of the quality combination problem is what we might call the palette problem […] How can this limited palette of microqualities combine to yield the vast array of macroqualities?
[…]The structure combination problem is roughly: how does microexperiential structure (and microphysical structure) combine to yield macroexperiential structure? Our macroexperience has a rich structure, involving the complex spatial structure of visual and auditory fields, a division into many different modalities, and so on. How can the structure in microexperience and microstructure yield this rich structure? An especially pressing aspect of the structure combination problem is the structural mismatch problem. Microphysical structure (in the brain, say) seems entirely different from the macrophenomenal structure we experience.”
Panpsychism has already suffered from a somewhat dubious reputation in the past, perhaps because it is often conceived of in simplistic terms by those unfamiliar with it. In many minds, panpsychism is presumed to imply a cartoonish idea of nature which imbues every speck of dust or atom with a human-like mind. While there may be no philosophical justification to rule out such a view, I think that all of the common forms of panpsychism offer far more sophisticated ideas. I would consider any view which disregards the primitive nature of microphysical systems relative to macrophenomenal states to be more of an anthropomorphic panpsychism; what I call pananthropism. There are weaker forms of panpsychism, such as panexperientialism or panprotoexperientialism, which do honor the difference in complexity between micro and macro scale phenomena, but these forms also dilute their effectiveness in resolving the Hard Problem of Consciousness. If we say that microphenomenal states aren’t really phenomenal or subjective, then we are still faced with having to explain why and how they become that way on the macro, human scale.
In between the two extremes, I introduce the word ‘pansensitivity’, which posits a universal minimal capacity for sense in naturally presented phenomena (not phenomenal representations). This sensitivity operates within its own scale and inertial frame of reference, and need not be very similar to human consciousness. Inertial frame is intended literally as well, as part of the hypothesis includes the idea that experiences themselves accumulate and take on a kind of gravitation-like tropism, similar to Rupert Sheldrake’s Morphic Resonance and David Bohm’s Implicate Order, which I refer to within MSR as Solitrophy. If solitrophy is the world builder, then significance (the nesting of representation within sensed presence) is its bricks and mortar.
Pansensitivity need not have a subjective or self-like quality, only a universal commonality of being and doing which is rooted in sensory-motor participation. By emphasizing the primacy of perception and participation as the heart of all possible experience, pansensitivity lays the foundation for a full scale integration with physical conjugates such as mass-energy, space-time, and electromagnetism. Additional conjugates from information science and mathematics integrate smoothly as well, such as form-function, signal-noise, geometry-algebra, and ordinality-cardinality. The consequence of moving ‘down’ from panpsychism to pansensitivity would be the loss of the anthropomorphic baggage imposed on primitive phenomena, and the consequence of moving ‘up’ from panprotoexperientialism would be to un-ask the Hard Problem on the micro-level.
Primordial Pansensitivity
Starting from the hypothesis that all phenomena are sensed or sensing phenomena*, and that nothing can be said to exist beyond the scope of sense, the entire Combination Problem is turned on its head to become one of breaking apart rather than merging together. Because all distinctions of micro and macro, phenomenal and physical are subsumed within the absolute primordialism of pansensitivity, we must employ a different way of thinking about the Combination Problem entirely. This revision of thought extends to a re-imagining of some of the underpinnings of mathematics and cosmology. In all respects where recent Western views assume a universe from nothing, or an arithmetic beginning with zero, primordial pansensitivity supposes the opposite perspective; a multiplicity carved out of unity, a near infinity of quantities diffracted as ratios within the number one. Separation becomes the derived local condition, while singularity is the absolute fundamental condition. It is not ‘a singularity’, or ‘a universe’, it is The singularity, and The universe.
Because the orientation of this model flips the traditional ranking of physics and phenomenology, and awareness is the sole defining principle, Professor Chalmers’ three subproblems would have to be restated to assume divergence rather than emergence. Emergence would be the local appearance of diffraction of the single whole rather than the combination of isolated parts. At this level, PIP can be considered a form of idealism, in that the head end of the Ouroboros is phenomenal presence and the tail end is diffracted by time (subjective self), space (objective matter), and sense itself (represented information). However, the very categorization of sense as ‘ideal’ is a materialistic bias which draws on Platonic notions of information supremacy rather than the sensory supremacy envisioned by PIP/MSR. Sense is not an ideal, it is concrete. It moves bodies and births galaxies. The sense of the human intellect is idealizing. Our mental life is a special case as far as we know. The rest of the universe does not seem to strive for perfection, it simply presents itself as perfect or imperfect by default. The human mindscape, by contrast, is often fixated on perfecting forms and functions, removing entropy from signal.
The Genius of Palette
When we consider the relation of the colors of the visible spectrum to white light, we can get a sense of how singularity and multiplicity coexist qualitatively, and how that coexistence differs from quantitative-logical structures. The difference between projected light and reflected color is instructive. As we know, converging three spotlights of competing colors gets us closer to white in the overlap, while mixing three paints of different colors gets us closer to grey or black. Similar displays of order can be found in the other senses as well, with harmonic progressions and white noise within sound, and other symmetric patterns which circumscribe the palette of olfactory sense. The palette of the color wheel, however, is uniquely suited to modeling aspects of sense combination. We might ask why that is. What makes vision seem more fully exposed to us than something like smell? How does the neurological emphasis on visual sensitivity translate into this ‘seeing is believing’ sense of trust?
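The contrast between converging spotlights and mixing paints can be sketched as a toy model, assuming idealized additive RGB light and a crude multiplicative model of pigment reflectance (real paint mixing is far more complex; the function names are my own):

```python
def mix_light(*colors):
    """Additive mixing: converging colored spotlights sums channels toward white."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

def mix_paint(*colors):
    """Crude subtractive mixing: each pigment absorbs light, multiplying
    reflectances channel by channel toward black."""
    out = (1.0, 1.0, 1.0)
    for c in colors:
        out = tuple(o * (ch / 255) for o, ch in zip(out, c))
    return tuple(round(o * 255) for o in out)

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
mix_light(RED, GREEN, BLUE)   # (255, 255, 255) — white in the overlap
mix_paint(RED, GREEN, BLUE)   # (0, 0, 0) — black on the palette
```

The same three inputs converge toward white under one operation and toward black under the other, which is the asymmetry the paragraph above points to.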
In particular, the color wheel or visible spectrum presents two themes within palette formulation. The first I will call the prospectively sensible theme. If we had never seen color before, and were presented for the first time with green and blue paint, it seems plausible that we could imagine a color in between green and blue as being turquoise or cyan. If we were presented with red and green paint instead, it seems completely implausible that anyone could imagine the existence of the color yellow. Yellow is not ‘prospectively’ sensible. Once we see yellow, however, and see the flow of the visible spectrum as it progresses smoothly from red to orange to yellow to green, the quality of yellow seems to fit in perfectly, so it is retrospectively sensible. In this example, cyan has both prospective and retrospective sensibility, but yellow only has retrospective sensibility. This gives the origin of yellow an unprecedented quality. I call this idiopathic property, which is common to all sense palettes, the ‘genius’ of the palette. The genius provides tentpoles, primary differences in kind from which secondary and tertiary differences of kind blend seamlessly into a multiplicity of differences in degree. This gives a view of the territory of pansensitivity’s version of the Combination Problem (or primary Divergence Problem).
The problem of the origin of palette genius is still an issue, but it is an issue which is diffused somewhat by the totality and unity of primordial sense, and the inversion of our expectation of nothingness rather than primordial everythingness. These genius qualia are manifestations of sense which may be more primitive than spacetime itself, so that the ingression of spacetime leaves certain critical pieces to the puzzle missing. Simply stated, the whole idea of causality and origin depends on time and sequence, so that these primary colors and sensations are as fundamental as sequence itself, and as any question that we can ask about it. Questions themselves are presumably no more fundamental than these elemental experiences.
Combinatory Eigenmorphism
Once a palette of sense has diverged and multiplied into spacetime availability, it is proposed that the role of subjective participants is to recover unity and simultaneity, completing a kind of sensory-semantic conservation cycle (which is also a palette of sense). The hypothesis of combinatory eigenmorphism is that the relation between any and all phenomenal experiences, whether they are cognitive, perceptual, or physical, can be characterized by specific categorical differences which are themselves ordered in a sensible schema.
Borrowing the eigen- prefix, used in terms such as eigenstate and eigenvector, and the root ‘morph’ as it is used in isomorphism and homomorphism, eigenmorphism is intended to describe an ordered set of elementary mappings within a closed continuum of possible mappings. Comparing two compasses, for example, the closed continuum of possible mappings would be the 360 x 360 degree matrix of possible needle direction combinations. Only one type of combination (a group of 360 out of the total 129,600 combinations) would be isomorphic, with both needles facing the same direction. Another group of 360 would be anti-isomorphic (one compass needle points North and the other points South). In between these two poles would be various shades and angles of disagreement. The right angles would be in perpendicular disagreement to both the isomorphic and anti-isomorphic combinations.
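The counting in the compass example can be verified with a quick enumeration (a minimal sketch; the labels iso, anti, and perp are my own shorthand, not terms from the text):

```python
# Enumerate the 360 x 360 space of compass-needle pairings.
pairs = [(a, b) for a in range(360) for b in range(360)]

iso = [p for p in pairs if (p[0] - p[1]) % 360 == 0]           # same heading
anti = [p for p in pairs if (p[0] - p[1]) % 360 == 180]        # opposite headings
perp = [p for p in pairs if (p[0] - p[1]) % 360 in (90, 270)]  # right angles

# len(pairs) == 129600, len(iso) == 360, len(anti) == 360, len(perp) == 720
```

Note that the perpendicular disagreements come in two families of 360 (one needle 90° clockwise of the other, or 90° counterclockwise), so they account for 720 of the 129,600 combinations.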
The use of eigenmorphism here is not intended as a mathematical abstraction, however. There may be more precise terms within algebra or geometry to describe such a rotating cycle of polarization stages, but the point of using morphism here is not to limit the combinations to one dimensional differences. Unlike a geometric degree or radian, this usage of morphism must apply to every kind of difference between any phenomena, not just to differences in shapes or orientation. This is potentially possible because of the holism of primordial pansensitivity. The divergence of every singularity into multiplicity can be described as tectonic – every diffracted palette and diffraction within a palette is like Pangea, breaking into continents which fit each other like puzzle pieces. Adages like ‘as above, so below’, and ‘opposites attract’ can be grounded in this foundational continuity.
Eigenmorphism must, therefore, apply not just to mathematical transformations, but to fully realized sense experiences, complete with personal participation and felt content. If we extend the compass metaphor and imagine that on the top of each compass is a tiny video screen which shows the other, and that the position of the needle determines the composition of that video image (not just brightness and contrast, but focus, size, realism, etc), we can begin to get a sense of what is meant by eigenmorphism. It is intended as a common schema to unite quality and quantity, or in Deleuzian-Bergsonian terms, differences in kind and differences in degree.
The full conjecture of eigenmorphism is that differences in kind are orthogonal to differences in degree, but that they are both part of the same cycle of sense which discerns all difference and, through that experienced discernment, effects a reproduction of order which is expressed through all phenomena. To be clear, the scope of the juxtaposition of this schema is absolute. We are comparing poetry to baseballs, and deja vu to carbide steel. The goal is to recognize a subtle framework which weaves together all phenomena, whether physical, phenomenal, or semiotic, and on the scale of the microcosmic, macrocosmic, or cosmic. Eigenmorphism can help organize, in one conceptual framework, the relation between micro and macro scales or across physical and phenomenal lines, where similarities may be found only in the extremity of their incommensurable difference. Eigenmorphism is a response to Einstein’s famous quip, “God does not play dice with the universe”, spoken in disbelief of quantum mechanical probability: God plays dice, dice plays God, but only sense can make a difference.
Eigenlinguistics
Language provides a good example of how physical objects and experience can coexist seamlessly within a single schema. Onomatopoetic words like bang! or pow! rely on a high degree of isomorphism between 1) sounds that we hear, 2) sensible generalizations of those sounds, and 3) sounds that can be spoken. Such sound-alike words are more universal than other kinds of words, as they require no translation from language to language. People of all ages and backgrounds intuitively understand that these words refer directly to events associated with those sounds. It seems likely that language itself must have originated with this kind of imitative behavior – the recording and replaying of sounds and gestures. The combination of literal imitation (bottom-up) and figurative association (top-down) yielded more abstract metaphors with more eigenmorphic combinations. Complex communications extended the step of sensible generalization (2) and, perhaps surprisingly, made communication and representation more difficult to separate from that which is represented.
With each extension from the literal to the figurative, the poetic and the abstract, we effect certain translations, each of which stand on their own as sensible connections, and which take their own most sensible places in the universal context. Again, going back to the color wheel: each hue and shade makes sense as its own unique individual experience, and as a mathematical vector within any number of sensible topologies (wheels, cylinders, cubes, parabolas, triangles, etc). It makes sense in many different ways, including the intuitively idiopathic sense of its palette genius.
Within language we find a vast context of meanings which have developed accidentally and intentionally, intuitively and counter-intuitively. Conventions of grammar and spelling reflect similar mixtures of logic, intuition, spontaneity, and inherited formalism. Beneath all of these is a semiotic foundation. To communicate is to represent, and to represent is to infer comparisons among subjects, objects, other subjects, and other comparisons. To discern differences between ‘things’ implies first a capacity to sense ‘things’, and to experience sensibility itself – an expectation of presence and participation.
Semiosis is a particular cognitive version of what I suggest is this fundamental sensibility; the capacity to mentally record and generalize or iconicize perceptions, and to record those essentialized perceptions to be abstracted further. If information is a perceived difference that makes a difference, then information itself depends on a more primitive capacity to discern difference from indifference, to care about that discernment, and the power to do something about it. The polarization of afferent sensory receptivity is the power of efferent motive projection; to participate intentionally in some way which promises to have an effect on what has been sensed. This, to me, is the ultimate firmament of all metaphysics. The universe is an eigenmorphic-relativistic singularity of all experience, and experience is a nested multiplicity of sensory-motives.
Eigenmorphism assumes this universal continuum of sense in which the degrees and kinds of nestings deform the context of perception itself. As mass deforms spacetime in General Relativity, experiential qualities warp experiential perspectives under Multisense Realism. Rather than assuming a one-to-one, isomorphic relation, in which, for instance, a particular neurotransmitter’s binding in the brain equals a particular particle of a subjective experience, there is a lattice of translation which shifts in direct proportion to the scale and nature of the pairing (micro to macro, private to public, familiar to foreign, etc). As macro scale entities, our human scope is dictated not only by size and frequency of our conscious frame-rate relative to other experiential entities, but by the character and history of our intentional participation. If we want to put a Buddhist twist on it, it could be said that karma is the gravity of consciousness, and eigenmorphism is the warping of consciousness that mirrors back its own warped condition as well as the phenomenal translation of all external conditions through that lensing.
From Relativity to Improbability
As Relativity uses the concept of inertial frames, eigenmorphism describes the holistic constitution of experience. As a prism can be seen to split a beam of white light or combine beams of colored light into one, our human experience unites the spectrum of zoological, biological, chemical, and physical experience. We can choose to see ourselves as animals, or meta-animals, or temporarily embarrassed deities. Our personal experience is proprietary and unique not just to the fingerprint or genome, but to the irreducibly absolute. The primordial pansensitivity hypothesis predicts that every experience, while seemingly composed of reducible, recombinant elements, is actually its own solitary universe – a vector of sense which cannot be reproduced completely. It is proposed that appearances of generality and duplication are a local effect, an artifact of eigenmorphic translation from plurality to singularity in which the discernment of differences is necessarily truncated at an appropriate level. We can see this, for example, in how we look at sand on the beach, and generalize the grains of sand in our mind. Under a microscope, we can see more of the unique character of each grain. Because we assume that sense is primordial, we can predict that the microscope too has its limit, beyond which discernment falls to zero and that which we are measuring becomes indiscernible from the instrument being used to measure.
The pansensitive conjecture equalizes unlikelihood and inevitability in the totality, since it is not within the entropic displacement of spacetime. Within spacetime, probability is a mechanistic absolute, but that conditionality is, under eigenmorphism, a local inversion of the larger conditions of non-probability. Like the yellow and the cyan, the expectation of predictable order is itself emergent from utter unpredictability. Probability is a palette genius. Because the assumption of the improbable is taken as an anthropic necessity of all possible universes, the unlikelihood of life in the universe is nullified. It is not certain physical conditions which give rise to life, it is life experience which is expressed through certain physical conditions. By analogy, Shakespeare did not arise from the combination of certain words, vast groups of words were employed by Shakespeare to express human stories.
Bodies and Experience, Scale and Frequencies
As a rule of thumb, the closer the scale of forms, the greater the range of possible eigenmorphic relations. Bodies which are of similar size and private experiences which share similar histories and qualities have more potential kinds of relations and more degrees of relation than phenomena of disparate scale or history. From our perspective, it appears that entities which are on the extreme range of scale in the universe like subatomic particles and galactic superclusters seem equally unlikely to host any kind of awareness. It could be that this is objectively** true, even beyond the prejudiced relativity of our human scale eigenmorphism, but it is not clear that there can ever be a difference between human truth and objective truth as long as we are human. For us, even if stars are bits of the Gods as ancient astronomers imagined, their experience is on such a remote scale to ours that our phenomenal states are inaccessible to each other.
More likely it seems that the great and infinitesimal entities do have objectively limited palettes compared with our own, but that those limitations are exaggerated because the eigenmorphic range of our own extended human sensitivity projects its own envelope of significance. Try as we might, the significance of an ant’s life is not on par with a human life, and even if ants looked like human beings, their tiny size relative to our body would make it hard for us to take them seriously. This is all part of the natural intuitive ordering by scale in the universe. Eigenmorphism describes the character of that ordering.
There are many fanciful ways to imagine microphenomenal or astrophenomenal states. Maybe all such entities are one collective experience just as ours is a collective experience of neurons; maybe there is only one proton-star experience which only appears to replay within the stories of younger, more mid-sized entities. It’s just as likely that microphenomenal states are unknowable, alien, and not worth thinking about.
The Pathetic Constant and Pathetic Fallacy
The degree to which we feel that another entity is capable of feeling could be called its pathetic constant, as it remains constant according to form/scale. The more familiar something is to us, the more we ‘like it’ and it is ‘like’ us, the higher the level of empathy we can sustain for it. The pathetic constant which we have toward ourselves, ironically, may not be as high as that which we reserve for those we admire. That kind of super-significance is a whole other story, but for the purposes of this consideration, it can be said that the pathetic constant toward the idealized self would be the maximum. While bigotry may allow some humans to feel that other humans are less worthy than other members of human society, this prejudice manifests as hatred and fear rather than a low pathetic constant. A true low pathetic constant would be associated with impersonal insignificance rather than personal malice.
Human history points to instances in which certain animals or objects or the bodies of dead nobility were revered with high pathetic values, but human societies in general tend to support a common pecking order of pathetic values which places humans before most animals, most animals before most insects, most insects before mold, and mold before minerals. We seem to have an idea about what is ‘like us’ which is relatively free of cultural variation, even if we choose to intentionally elevate one entity into a higher caste.
It may seem a trivial observation, or it may seem that this folk hierarchy is derived from mechanical measures of complexity and familiarity, and on one level that might be true. However, when considering the sentience of technologies like artificial intelligence, it becomes necessary to have a place to start. The pathetic fallacy is one in which human experiential qualities are attributed to an inanimate object or machine, i.e. ‘the camera loves you’. Even the most ardent supporter of Strong AI must admit that at some level, say, the level of a trash can lid which flaps down to “say THANK YOU” every time a tray is removed, there is a gap between the appearance of the behavior to a human audience and the subjective intent behind that behavior. We can understand that the trash can lid is in fact moved by the tension of physical materials, not by politeness. Since we assign the trash can a pathetic constant which is absolutely minimal, we do not read into its behavior personally, and the eigenmorphic relation between any proposed internal state of the trash can and the polite behavior we might interpret is null; any attribution of meaning between what we experience and what the trash can experiences is purely one-sided and non-coincidental, or else super-signified as part of a manic or psychotic episode.
Should the polite words ‘Thank you!’ come from a human being instead, there is a much richer field of eigenmorphic mappings to use for interpretation. The meaning of the exchange can range from the trivial and impersonal, as in the case of a consumer transaction with a public-facing employee of a corporation, to the heartfelt and genuine, even life-changing under some circumstances. The aperture of possibilities is open widest where the pathetic constant is maximized, as it is those whom you most resemble, or would most like to resemble, who can hurt you or heal you most.
Artificial Intelligence
In the case of the trash can lid, the intent is for the exchange not to be examined very deeply. For the operators of such restaurants, superficial gestures of politeness support an impression of ‘good service’, particularly in the mechanized and personally impoverished environment of a fast food outlet, which some might find unpleasant if the impersonality of the operation were fully disclosed. This ‘polite face’ is functionally similar to the GUI which modern computer systems employ, which dresses up a command line interface that the general public may find difficult. Of course, even the command lines are a polite face superimposed on the more mechanistic levels of hexadecimal or binary code, and finally microelectronic switch configurations. The phrase user-friendly refers to enhancements which are intended to increase the pathetic constant in public-facing systems, promoting psychological ease as well as more intuitive functionality. It’s interesting to note the role that scale and frequency play in this. A cell phone would not be very user-friendly if you could only use it at the same distance that your computer screen sits from your face.
Binary code is roughly isomorphic to microelectronic switch configurations, or to any Turing machine’s configurations. So much so, that there is a branch of computing devoted to developing programs from languages based on bit geometry rather than conventional number representations. The future of nanocomputing or quantum computing may use code that looks very much like what it is and what it does. For now, though, the relation between digital bits, and between binary 1s and 0s, remains slightly more abstracted than the nearly absolute isomorphism of embodied computation. It is important to realize that on the microelectronic level, where the pathetic constant approaches zero for most of us, our commands and programs are not understood by the electronics (or gears, or punch cards). Like vast collections of trash can lids, the physical components of any machine are moved involuntarily and changed intentionally by us, from the outside. The hammer hits the bell, the bell jiggles the float, and so on, in Rube Goldberg fashion, as mechanical interactions among objects in space.
What is the difference between outside-in and inside-out interaction? Some have argued nothing. Philosophical arguments from Leibniz to Searle notwithstanding, the appearance of the brain as a physical machine is so persuasive and complete that for many, the prospect of empathy emerging from mechanical complexity alone seems to be the only possibility, and a possibility which, from their perspective, is undeniable. We see neurons firing, and it reminds us of a computer. We see software running, and it reminds us of our mental experience. Case closed. The pathetic constant is pushed to the maximum, organic and inorganic processes become identical, and all eigenmorphism is collapsed to isomorphism… the trash can lid becomes, at least to some small degree, polite. This is the kind of panpsychism that we should avoid. Not only for the sake of poor computer scientists who would automatically become guilty of atrocities in developing experimental beings, and not for the sake of human supremacy, but for the sake of understanding the whole truth about presentation and representation.
Conclusion
Eigenmorphism is difficult to conceive of properly without fully comprehending the implications of panpsychism, pansensitivity, primordial identity, and perceptual relativity. The idea that General Relativity and Quantum Mechanics both expose opposite poles of what is ultimately identical with ordinary perception can be used as a basis for modeling a translation lattice. This envelope or matrix of perception, which unites the microcosmic and the astrophysical, acts as a lens through which all subjective and objective appearances are presented. Like the eigenstates of QM, the relation of experience to itself has selective positions: settled inertial frames which evolve and recapitulate their own evolution. Participatory sense provides a richer context than mathematical spaces, so that forms and functions are only the publicly measurable tip of an immeasurable iceberg of private appreciation and participation which is unbounded by spacetime.
*that there are no proto-phenomenal or non-experienced properties possible, since ontology itself is treated as supervening on sense.
**true objectivity would require that we discard eigenmorphism, as only the absolute frame of reference would be without relativistic distortion, but objectivity requires that some sensory translation is objectifying another sense experience. Having no other sense experience to translate, the absolute frame can only diagonalize its own diffraction within itself.
Determinism: Tricks of the Trade
The objection that the terms ‘consciousness’ or ‘free will’ are used in too many different ways to be understandable is one of the most common arguments that I run into. I agree that it is a superficially valid objection, but on deeper consideration, it should be clear that it is a specious and ideologically driven detour.
The term free will is not as precise as a more scientific term might be (I tend to use motive, efferent participation, or private intention), but it isn’t nearly the problem that it is made out to be in a debate. Any eight-year-old knows well enough what free will refers to. Nobody on Earth can fail to understand the difference between doing something by accident and doing it intentionally, or between enslavement and freedom. The claim that these concepts are somehow esoteric doesn’t wash, unless you already have an expectation of a kind of verbal-logical supremacy in which nothing is allowed to exist until we can agree on a precise set of terms which give it existence. I think that this expectation is not a neutral or innocuous position, but actually contaminates the debate over free will, stacking the deck unintentionally in favor of determinism.
It’s subtle, but ontologically, it is a bit like letting a burglar talk you into opening up the door to the house for them since breaking a window would only make a mess for you to clean up. Because the argument for hard determinism begins with an assumption that impartiality and objectivity are inherently desirable in all things, it asks that you put your king in check from the start. The argument doubles down on this leverage with the implication that subjective intuition is notoriously naive and flawed, so that not putting your king in check from the start is framed as a weak position. This is the James Randi kind of double-bind. If you don’t submit to his rules, then you are already guilty of fraud, and part of his rules is that you have no say in what his rules will be.
This is the sleight of hand used by Daniel Dennett as well. What poses as a fair consideration of hard determinism is actually a stealth maneuver to create determinism – to demand that the subject submit to the forced disbelief system and become complicit in undermining their own authority. The irony is that it is only through a personal, social, and political attack on subjectivity that the false perspective of objectivity can be introduced. It is accepted only through the presentation of an argument of personal insignificance, so that the subject is shamed and bullied into imagining itself an object. Without knowing it, one person’s will has been voluntarily overpowered and confounded by another person’s free will into accepting that this state of affairs is not really happening. In presenting free will and consciousness as a kind of stage magic, the materialist magician performs a meta-magic trick on the audience.
Some questions for determinist thinkers:
- Can we effectively doubt that we have free will? Or is the doubt a mental abstraction which denies the very capacity for intentional reasoning upon which the doubt itself is based?
- How would an illusion of doubt be justified, either randomly or deterministically? What function would an illusion of doubt serve, even in the most blue-sky hypothetical way?
- Why wouldn’t determinism itself be just as much of an illusion as free will or doubt under determinism?
Another common derailment is to conflate the position of recognizing the phenomenon of subjectivity as authentic with religious faith, naive realism, or soft-headed sentimentality. This also is ironic, as it is an attack on the ego of the subject, not on the legitimacy of the issue. There is no reason to presume any theistic belief is implied just because determinism can be challenged at its root rather than on technicalities.