Archive
Analogue, Brain Simulation Thread
Can anyone tell the difference between a set of algorithms in code that mimics all the known processes for the input and output of a guitar into analogue equipment, and the analogue equipment itself? The answer is no, because even pros can’t tell the difference. The entire analogue process has been sufficiently well modeled and encapsulated in the algorithms. The inputs and outputs are physically realistic where the input and output are important. That is what substrate modelling of brain processes in computational neuroscience is about, i.e. brain simulations.
Just because our analysis of what is going on in the brain reminds us of information processing does not mean that the brain is only an information processor, or that consciousness is conjured into existence as a kind of information-theoretic exhaust from the manipulation of bits.
What you are not considering is that beneath any mechanical or theoretical process (which is all that computation is as far as we know) is an intrinsic sensible-physical context which allows switches to load, store, and compare – allows recursive enumeration, digital identities,…a whole slew of rules about how generic functions work. This is already a low level kind of consciousness. That could still support Strong AI in theory, because bits being the tips of an iceberg of arithmetic awareness would make it natural to presume that low level awareness scales up neatly to high level awareness.
In practice, however, this does not have to be the case, and in fact what we see thus far is the opposite. The universally impersonal and uncanny nature of all artificial systems suggests the complete lack of personal presence. Regardless of how sophisticated the simulation, all imitations have some level at which some detector cannot be fooled. Consciousness itself however, like the wetness of water, cannot be fooled. No doll, puppet, or machine which is constructed from the outside in has any claim on sentience at the level which we have projected onto it. This is not about a substitution level, it is about the specific nature of sense being grounded in the unprecedented, genuine, simple, proprietary, and absolute rather than the opposite (probabilistic, reproducible, complex, generic, and local). From the low level to the high is not a difference in degree but a difference in kind, even though, going back from the high level to the low, the difference is only one of degree.
What I mean by that is that anything can be counted, but numbers cannot be reconstructed into what has been counted. I count my fingers…1, 2, 3, 4, 5. We have now destructively compressed the “information” of my hand, each unique finger and the thumb, into a figure. Five. Five can apply generically to anything, so we cannot imagine that five contains the recipe for fingers. This is obviously a reductio ad absurdum, but I introduce it not as a straw man but as a clear, simple illustration of the difference between sensory-motive realism and information-theoretic abstractions. You can map a territory, but you can’t make a territory out of a map regardless of how much the map reminds you of the territory.
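The finger-counting example can be made literal in a few lines of code (a toy sketch of my own; the labels are just illustrative):

```python
# Counting is a lossy, one-way compression: many different
# collections all reduce to the same bare figure.
fingers = ["thumb", "index", "middle", "ring", "little"]
toes = ["big", "second", "third", "fourth", "little"]

count = len(fingers)   # the "figure": 5
print(count)           # -> 5

# The number 5 applies generically: a completely different
# collection compresses to the exact same figure, so the
# fingers cannot be reconstructed from the number alone.
assert len(toes) == count
```

The map can be made from the territory in one direction only; nothing in `5` distinguishes a hand from a foot.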
So yes, digital representations can seem exactly like analog representations to us, but they are both representations within a sensory context rather than a sensory-motive presentation of their own. All forms of representation exist to communicate across space and time, bridging or eliding the entropic gaps in direct experience. It’s not a bad thing that modeling a brain will not result in a human consciousness, it’s a great thing. If it were not, it would be criminal to subject living beings to the horrors of being developed and enslaved in a lab. Fortunately, by modeling these beautiful 4-D dynamic sculptures of the recordings of our consciousness, we can tap into something very new and different from ourselves, but without being a threat to us (unless we take it for granted that they have true understanding, then we’re screwed).
Logical Positivism and White Light
“Can you link? Perhaps quickly explain?”
Sure. This post relates most directly to transcending Logical Positivism:
Wittgenstein in Wonderland, Einstein under Glass
There are also a lot of pages and posts on the site that refer back to how physical and metaphysical assumptions can both be transcended.
Briefly, what I suggest is that rather than assuming physical and mental isolation as objectively true, we should assume the opposite and see isolation as a localization of totality, in the same way that ‘green’ is a localization of white, and white is really transparency or visual sensitivity itself which is too bright for us to see through*.
The universe, consciousness, physics, mathematics, are all understandable as parts of the whole through triangulation of symmetric relations. Rather than spurning the thin air of the metaphysical or the mess of the anthropological, we should understand that their lack of sterile certainty reflects our own proximity to it, and that certainty itself is a function of distance – an illusion of monolithic realism to play against a reality of layered fiction. Physics is not realism, but the capacity to modulate realistic fiction against itself. Physics is participatory sense, and sense has understandable features which cut across all layers and scales of experience.
*This thought deserves to be developed in more depth. What is the color white? We know from basic science that white is a kind of jumble of all of the wavelengths of visible light. If we think about how we encounter white light in nature, however, it is often as a reflection in something transparent or shiny like water or glass. If you have ever tried to paint water, you know that it is about carefully placed contrasts of bright/white and dark paint.

Likewise, the brilliance of a white diamond is a reflection of its high refractive index – it’s just so transparent from so many different angles that your eye can’t handle it. Given some level of ambient illumination, the visual sense is opened up beyond the human spec, and there’s too much to see through. It’s meta-transparent. As with all media, when the spec limit is exceeded, the guts of the medium itself begin to be exposed. What happens when there’s too much data on your internet connection? Freezing, pixelation. The digital substrate is exposed. Same thing with lens flares, records skipping, static on the radio, etc. The fabric which is carrying the message bleeds into the message. Light is the same way – too much potential clarity is blinding. Too much positivity and logic obscures the reality of the consciousness which creates it.
Chess, Media, and Art
I was listening to Brian Regan’s comedy bit about chess, and how a checkmate is such an unsatisfying ending compared to other games and sports. This is interesting from the standpoint of the insufficiency of information to account for all of reality. Because chess is a game that is entirely defined by logical rules, the ending is a mathematical certainty, given a certain number of moves. That number of moves depends on the computational resources which can be brought to bear on the game, so that a sufficiently powerful calculator will always beat a human player, since human computation is slower and buggier than semiconductors. The large-but-finite number of moves and games* will be parsed much more rapidly and thoroughly by a computer than a person could.
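The certainty of a rule-defined game can be shown concretely with a far smaller game than chess. Here is a sketch (my own toy example, not chess itself): in the game of Nim, where players alternate taking 1–3 stones and whoever takes the last stone wins, exhaustively parsing the game tree decides every position before a single move is played.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """True if the player to move can force a win with `stones` left.

    A position is winning if any legal move (take 1-3 stones)
    leaves the opponent in a losing position. This exhaustive
    parsing is the same, in principle, as a chess engine's -- only
    the size of the tree differs.
    """
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Every outcome is a mathematical certainty in advance:
# the player to move loses exactly when stones is a multiple of 4.
print([n for n in range(1, 13) if not first_player_wins(n)])  # -> [4, 8, 12]
```

A fast enough machine parses the whole possibility space; no play is ever consummated, only confirmed.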
This deterministic structure is very different (as Brian Regan points out) from something like football, where the satisfaction of game play is derived explicitly from the consummation of the play. It is not enough to be able to claim that statistically an opponent’s win is impossible, because in reality statistics are only theoretical. A game played in reality rather than in theory depends on things like the weather and can require a referee. Computers are great at games which depend only on information, but have no sense of satisfaction in aesthetic realism.
In contrast to mechanical determinism, the appearance of clichés presents a softer kind of determinism. Even though there are countless ways that a fictional story could end, the tropes of storytelling provide a feedback loop between audiences and authors which can be as deterministic -in theory- as the literal determinism of chess. By switching the orientation from digital/binary rules to metaphorical/ideal themes, it is the determinism itself which becomes probabilistic. The penalty of making a movie which deviates too far from the expectations of the audience is that it will not be well received by enough people to make it worth producing. Indeed, most of what is produced in film, TV, and even gaming is little more than a skeleton of clichés dressed up in more clichés.
The pull of the cliché is a kind of moral gravity – a social conditioning in which normative thoughts and feelings are reinforced and rewarded. Art and life do not reflect each other so much as they reflect a common sense of shared reassurance in the face of uncertainty. Fine art plays with breaking boundaries, but playfully – it pretends to confront the status quo, but it does so within a culturally sanctioned space. I think that satire is tolerated in Western-objective society because of its departure from the subjective (“Eastern”) worldview, in which meaning and matter are not clearly divided. Satire is seen as not threatening to the material-commercial machine, which does not depend on human sentiments to run, and the controversy that satire produces can even be used to drive consumer demand. Something like The Simpsons can be both a genuinely subversive comedy and a fully merchandized, commercial meme-generating partner of FOX.
What lies between the literally closed world of logical rules and the figuratively open world of surreal ideals is what I would call reality. The games that are played in fact rather than just in theory, which share timeless themes but also embody a specific theme of their own are the true source of physical sustenance. Reality emerges from the center out, and from the peripheries in.
*“A guesstimate is that the maximum logical possible positions are somewhere in the region of +-140,100,033, including trans-positional positions, giving the approximation of 4,670,033 maximum logical possible games”
Questioning the Sufficiency of Information
Searle’s “Chinese Room” thought experiment tends to be despised by strong AI enthusiasts, who seem to take issue with Searle personally because of it. Beyond accusing both the allegory and its author of being stupid, the reply offered most often is the Systems Reply: the man in the room may not understand Chinese, but surely the whole system, including the book of translation, must be considered to understand Chinese.
Here, then, is a simpler and more familiar example of how computation can differ from natural understanding, one which is not susceptible to any mereological Systems argument.
If any of you use passwords which are based on a pattern of keystrokes rather than the letters on the keys, you know that you can enter your password every day without ever knowing what it is you are typing (something with a #r5f^ in it…?).
I think this is a good analogy for machine intelligence. By storing and copying procedures, a pseudo-semantic analysis can be performed, but it is an instrumental logic that has no way to access the letters of the ‘human keyboard’. The universal machine’s keyboard is blank and consists only of theoretical x,y coordinates where keys would be. No matter how good or sophisticated the machine is, it will still have no way to understand what the particular keystrokes “mean” to a person, only how they fit in with whatever set of fixed possibilities has been defined.
Taking the analogy further, the human keyboard only applies to public communication. Privately, we have no keys to strike, and entire paragraphs or books can be represented by a single thought. Unlike computers, we do not have to build our ideas up from syntactic digits. Instead the public-facing computation follows from the experienced sense of what is to be communicated in general, from the top down, and the inside out.
How large does a digital circle have to be before the circumference seems like a straight line?
Digital information has no scale or sense of relation. Code is code. Any rendering of that code into a visual experience of lines and curves is a question of graphic formatting and human optical interaction. In a universe that assumes information as fundamental, the proximity-dependent flatness or roundness of the Earth would have to be defined programmatically. Otherwise, it is simply “the case” that a person is standing on the round surface of the round Earth. Proximity is simply a value with no inherent geometric relevance.
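The question in the heading has a definite geometric answer, which is worth working out. A rough sketch (my own example, using the standard sagitta formula for how far a circular arc bulges above its chord, with Earth’s mean radius as the circle):

```python
import math

def bulge_above_chord(radius: float, chord: float) -> float:
    """Sagitta: the maximum height of a circular arc above a
    straight chord of the given length. As the radius grows,
    the deviation shrinks toward zero and the curve 'seems'
    straight to any fixed-resolution observer."""
    return radius - math.sqrt(radius**2 - (chord / 2) ** 2)

EARTH_RADIUS_M = 6_371_000  # mean radius of the Earth, in meters

# Over a 1 km stretch, the round Earth deviates from a flat
# line by only about two centimeters.
print(round(bulge_above_chord(EARTH_RADIUS_M, 1_000), 3))
```

The arithmetic is indifferent to whether anything actually looks flat; that the deviation is imperceptible is a fact about our optical interaction, not about the numbers.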
When we resize a circle in Photoshop, for instance, the program is not transforming a real shape, it is erasing the old digital circle and creating a new, unrelated digital circle. Like a cartoon, the relation between the before and after, between one frame and the “next” is within our own interpretation, not within the information.
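A rasterized sketch of that point (a toy model of my own, not Photoshop’s actual resampling code): a “circle” in a bitmap is just a set of lattice points, and scaling the small circle’s points does not produce the big circle, which has to be computed fresh from the equation.

```python
import math

def raster_circle(radius: int) -> set:
    """All integer pixels whose distance from the origin rounds
    to `radius` -- a crude one-pixel-thick digital circle."""
    return {(x, y)
            for x in range(-radius - 1, radius + 2)
            for y in range(-radius - 1, radius + 2)
            if round(math.hypot(x, y)) == radius}

small = raster_circle(10)
big = raster_circle(20)

# Doubling the small circle's pixels does NOT yield the big circle:
# the enlarged copy is full of gaps. The program must discard the
# old pixels and compute new, unrelated ones.
scaled = {(2 * x, 2 * y) for (x, y) in small}
print(scaled == big)           # -> False
print(len(big) > len(scaled))  # -> True
```

The relation between the “before” and “after” circle exists in our interpretation, not in either pixel set.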
Playing Cards With Qualia
Here is an example to help illustrate what I think is the relationship between information and qualia that makes the most sense.

Here I am using the delta (Δ) to denote “difference”, n to mean “numbers” or information, kappa for aesthetic “kind” or qualia, and delta n degree (Δn°) for “difference in degree”.
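The original image of the formulas does not survive in this archive, but from the notation just described they can be reconstructed roughly as:

Δ(n → κ) ≠ Δn°
Δ(κ → n) = Δn°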
The formula on top means “The difference between numbers and aesthetic qualities is not a difference in degree.” This means that there is no known method by which a functional output of a computation can acquire an aesthetic quality, such as a color, flavor, or feeling.
Reversing the order in the bottom formula, I am asserting that the difference between qualia and numbers actually is only a difference in degree, not a difference in kind. That means that we can make numbers out of qualia, by counting them, but numbers can’t make qualia no matter what we do with them. This is to say also that subjects can reduce each other to objects, but objects cannot become subjects.
Let’s use playing cards as an example.
Each card has a quantitative value, A-K. The four suits, their colors and shapes, the portraits on the royal cards…none of them add anything at all to the functionality of the game. Every card game ever conceived can be played just as well with only four sets of 13 number values.
The view which is generally offered by scientific or mathematical accounts, would be that the nature of hearts, clubs, diamonds, kings, etc can differ only in degree from the numbers, and not in kind. Our thinking about the nature of consciousness puts the brain ahead of subjective experience, so that all feelings and qualities of experience are presumed to be representations of more complicated microphysical functions. This is mind-brain identity theory. The mind is the functioning of the brain, so that the pictures and colors on the cards would, by extension, be representations of the purely logical values.
To me, that’s obviously bending over backward to accommodate a prejudice toward the quantitative. The functionalist view prefers to preserve the gap between numbers and suits and fill it with faith, rather than consider the alternative that now seems obvious to me: You can turn the suit qualities into numbers easily – just enumerate them. The four suits can be reduced to 00,01,10, and 11. A King can be #0D, an Ace can be 01, etc. There is no problem with this, and indeed it is the natural way that all counting has developed: The minimalist characterization of things which are actually experienced qualitatively.
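That enumeration is trivial to carry out in code. A minimal sketch (the bit layout is my own choice, made to match the #0D/01 style above: rank in the low nibble, suit in the two bits above it):

```python
SUITS = ["hearts", "clubs", "diamonds", "spades"]   # -> 00, 01, 10, 11
RANKS = ["A", "2", "3", "4", "5", "6", "7",
         "8", "9", "10", "J", "Q", "K"]             # -> 0x1 .. 0xD

def encode(suit: str, rank: str) -> int:
    """Reduce a qualitative card to a bare number."""
    return (SUITS.index(suit) << 4) | (RANKS.index(rank) + 1)

def decode(code: int) -> tuple:
    """Recover the labels -- but only because the lookup tables
    still hold the qualities. The number itself contains no red,
    no heart shape, no crowned portrait."""
    return SUITS[code >> 4], RANKS[(code & 0x0F) - 1]

print(hex(encode("hearts", "K")))   # -> 0xd  (the #0D above)
print(hex(encode("hearts", "A")))   # -> 0x1
assert decode(encode("spades", "7")) == ("spades", "7")
```

Note that `decode` only works by consulting tables of names that were supplied from outside the numbers; the reduction runs downhill in one direction only.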
The functionalist view requires the opposite transformation, that the existence of hearts and clubs, red and black, is only possible through a hypothetical brute emergence by which computations suddenly appear heart shaped or wearing a crown, because… well because of complexity, or because we can’t prove that it isn’t happening. The logical fallacy being invoked is Affirming the Consequent:
If Bill Gates owns Fort Knox, then he is rich.
Bill Gates is rich.
Therefore, Bill Gates owns Fort Knox.
If the brain is physical, then it can be reduced to a computation.
We are associated with the activity of a brain.
Therefore, we can be reduced to a computation.
To correct this, we should invert our assumption, and look to a model of the universe in which differences in kind can be quantified, but differences in degree cannot be qualified. Qualia reduce to quanta (by degree), but quanta do not enrich to qualia (at all).
To take this to the limit, I would add the players of the card game to the pictures, suits, and colors of the cards, as well as their intention and enthusiasm for winning the game. The qualia of the cards is more “like them” and helps bridge the gap to the quanta of the cards, which is more like the cards themselves – digital units in a spatio-temporal mosaic.
Why Likeness is Not, Like, the Same as Sameness
Why do we like to like the same things, until the thing we liked becomes the same old thing?
Why is there “Good as New” and “Like New”, but not “Same as New”?
I think that the difference between like and same is especially related to consciousness, and supports the idea of awareness (and therefore attention) as more ‘like’ novelty and ‘like-ness’ than it is ‘the same as’ the integration or processing of information.
Machines are characterized by their ability to do the same thing, over and over. The idea behind digital technology is really to be able to do the exact same thing, over and over and over, forever. Does this kind of behavior wake us up or does it lull us into a stupor? What kinds of things put us to sleep and what kinds of experiences wake us up?
Waking up is not an abstract theory. Waking up instantiates us into the directly and concretely sensed now, into public time. The now and the new are unrepeatable and unique, thus there can be nothing which is ‘the same as’ new without actually being new. When we say that something is ‘the same’ as something else, we are often speaking metaphorically. What we mean is that the difference is not important, and that one thing is functionally equivalent to another.
Anti-Metaphor
Within the world of mathematics, ‘the same’ or “=” is a metaphor for that which is literally identical or interchangeable in all circumstances. Unlike physical reality, the whole of mathematics is a symbolic abstraction – a metaphor for anti-metaphor:
Where metaphors are ‘like’ conceptual rhymes or semantic likeness which cut across the whole of human intuition poetically and aesthetically, mathematical metaphors are aiming for the opposite effect in which meaning is frozen into position, clear, defined, and unambiguous. This is meaning which has been reflected in the looking glass of thermodynamic irreversibility. It is the privatized essence of publicity.
When we look out of ourselves, we see only that which can be decomposed and measured. Feeling is presented as figures, and figuring them out literally gives us a feeling of transcending the ambiguity, fluidity, and obscurity of our own subjective awareness.
The opportunity that lies before us, as I see it, is to recover the authenticity of awareness without sacrificing the reliability of its substitute. The worldview that is driven by quantitative formula alone cannot locate the now, other than as a promise that it will eventually be found – under a heap of accidents. Accidents and probability are the inverted image of intention and likeness. They are what you get when sameness is assumed to be primitive. The universe is failed sameness and broken symmetry – serial mutation.
To overcome the prejudices inherent in this worldview, an important step is to understand the irony that the intention behind measurement leads to its own perfect illogical fallacy. To count and codify is to try to escape from personal bias and fuzzy ‘likeness’ which is not the ‘exact same thing’ as truth, but what we have found increasingly is that we cannot be immune from an equally toxic bias toward the impersonal. As much as we want to be ‘certainly in the right’, and to put ‘everything under the sun’ in tune, the enlightenment of the Western mind is eclipsed by its own insensitivity and denial. The more that we seek out the next product or service to make us feel ‘like new’, the faster it becomes the same old crap.
Free Will Isn’t a Predictive Statistical Model
“Free will is a program guessing what could happen if resources were spent executing code before having to execute it.”
I suggest that Free Will is not merely the feeling of predicting effects, but is the power to dictate effects. It gets complicated because when we introspect on our own introspection, our personal awareness unravels into a hall of sub-personal mirrors. When we ask ourselves ‘why did I eat that pizza’, we can trace back a chain of ‘because…I wanted to. Because I was hungry…Because I saw a pizza on TV…’ and we are tempted to conclude that our own involvement was just to passively rubber stamp a course of multiple-choice actions that were already in motion.
If instead, we look at the entire ensemble of our responses to the influences, from TV image, to the body’s hunger, to the preference for pizza, etc as more of a kaleidoscope gestalt of ‘me’, then we can understand will on a personal level rather than a mechanical level. On the sub-personal level, where there is processing of information in the brain and competing drives in the mind, we, as individuals do not exist. This is the mistake of the neuroscientific experiments thus far. They assume a bottom-up production of consciousness from unconscious microphysical processes, rather than seeing a bi-directional relation between many levels of description and multiple kinds of relation between micro and macro, physical and phenomenal.
“My big interest is in how intention causes action.”
I think that intention is already an action, and in a human being that action takes place on the neurochemical level if we look at it from the outside. For the motive effect of the brain to translate into the motor effect of the rest of the body involves the sub-personal imitation of the personal motive, or you could say the diffraction of the personal motive as it is made into an increasingly impersonal, slower, larger, and more public-facing (mechanical) process.
Free Will and the Unconscious
The key oversight, in my opinion, in the approach taken by neuroscientific research into free will (Libet et al) is in the presumption that all that is not available to us personally is ‘unconscious’ rather than conscious sub-personally. When we read these words, we are not conscious of their translation from pixels to patches of contrasting optical conditions, to loops and lines, to letters and words. From the perspective of our personal awareness, the words are presented as a priori readable and meaningful. We are not reminded of learning to read in kindergarten and have no feeling for what the gibberish that we are decoding would look like to someone who could not read English. The presentation of our world is materially altered at the sub-personal, but not ‘unconscious’ level. If it were unconscious, then we would be shocked to find that words were made of lines and loops or pixels.
In the same way, a robotic task is quickly anticipated, even 10 seconds ahead of time, without our personality getting involved. This does not mean that it is not ‘us’ making the choice, only that there is no need for such an easy and insignificant choice to be recognized by another layer of ‘us’, and reported by a third layer of ‘us’ to the personal layer of us.
When we work on the sub-personal level of neurons, we are addressing a layer of reality in which we, as persons, do not exist. Because we have not yet factored in perceptual relativity as a defining existential influence, we are making the mistake of treating a human being as if they were made of generic Legos instead of a single unique and unrepeatable living cell which has intentionally reproduced itself a trillion times over – each carrying the potential for intention and self-modifying teleology.
Why an Atom is More Like a Person Than a Doll Is
“Another thing that really puzzles me is the way that you agree with me that nothing is inanimate, and yet you repeatedly use arguments that are based on the premise that some things are inanimate. Is this just an *apparent* contradiction because we use the term ‘inanimate’ in fundamentally different ways, or is it a contradiction in your thinking? Could you perhaps explain this?”
It makes sense that it would seem contradictory, as this issue is really a more advanced concept that goes beyond accepting the initial premises which we agree on. Let’s say that we want to create a whole other Everything from scratch. In my view, as long as we keep things relatively simple, as in no complex organic life, our views are pretty much interchangeable. It doesn’t matter whether information processes are irreducibly animate as you say, or whether information processes are actually the self-diffracted gaps in the primordial identity pansensitivity, as I suggest. The effect is indistinguishable and we have cool stuff going on, with physics, aesthetics, and entropy all naturally falling out as parameters.
The question of primordial identity begins to seem more important as multicellular life begins and we have to choose to bet on whether the body of any dividing cell is type-identical to the experience associated with the organism as a whole, or whether there are multiple layers of experience going on. If there are multiple layers of awareness going on, does one of the layers act as an umbrella for the others? If so, is it a summary/identity layer, as the color white would be to the visible spectrum of colors, or is it an emergent layer produced by transfers of quantitative results, so that the cellular experiences are a priori ‘real’ and the macrophenomenal experiences are generated as a kind of projection which is less than primitively real?
What I do with MSR is to assume that the primary relation is perceptual relativity. This means that spacetime is scaled to the significance of experiences rather than fixed to a scalar index. By this I mean that the cell level microphenomenal experience is simultaneous with the organism level macrophenomenal experience, but that their simultaneity is asymmetric, as the macro appears smeared across time from the micro perspective. When we use microscopic scales to poke around in the body and brain, we are essentially driving a wedge between the macro and micro, but without recognizing that microphysical effects refer only to microphenomenal affects and not macrophenomenal affects.
At the level of the cell or molecule, the organism as a whole, if it is a complex organism, does not exist. Literally. There is no {your name here} to your DNA. It’s a completely different level of description in which the public side relates mechanically (molecules must functionally produce cells and be produced by cells), and the private side relates *metaphorically*. It’s a complete divergence which does not appear prominently in pre-biotic phenomena. Each organism is evolving differently on the inside than it is on the outside, and that dimorphism is getting exponentially more pronounced as it evolves. The public body side appears to be physically recapitulating itself as a growing, multiplying, dividing structure in space, while the private experiential side has no appearance and is felt as the invariant nexus of a story about the world which appears to be repeating in nested cycles and progressing in a linear narrative.
The two stories are different. The microphenomenal story appears to relate to physical events, which we can observe in everything from a viral infection to changes in temperature or pressure in the environment. The macrophenomenal story, at least for us, is consumed by history and teleology. We respond to the environment based on our accumulated experience and intention. This so-called mind-body split is actually worse than that. Coming from a time where we had no understanding of microphysics, the simplistic mind-body mapping flattens human awareness into a single horizontal dualism. What I suggest is that dualism is actually an orthogonal monism, but that each horizontal dualism is part of a vertical stack. The cell that is seen by the organism in the organism’s world is only a snapshot that it can see during one of its moments. To look at one of your blood cells under a microscope is for the cell to see itself from two different evolutionary times, with the newer, larger experience looking at a moment of the older, smaller experience and seeing it from the outside, as an object or machine. This is how the aesthetics of distance works for us – when we outgrow an experience, the here and now associated with us is recontextualized aesthetically as a there and then which is associated with “it”.
I don’t know if that makes it seem even more confusing, but what I am trying to get at is that the more the universe recapitulates itself as increasingly nested experiences, the more important it is that we see that which is nesting itself as primary and the overall nest as ‘inanimate’. Pragmatically, we can’t walk around the house worried about how the carpet fibers feel, or whether we have underestimated the feelings of the avatar we have created in a computer game. If it is the nesting instead which is primary rather than what is being nested, then we have no justification at all for our intuitions about life and death or organic vs artificial processes and we can only turn to a kind of gradient of probable intelligence based on complexity.
There are a lot of problems with that, not the least of which is that we are required to take the word of any sufficiently sophisticated machine over our own understanding. We become unable to justify any significant difference between an interactive cartoon character that acts like a person, and a fellow human being. A successful stock market trading program would be entitled to staff companies entirely with copies of itself and reduce the entire human population to an unemployed resource liability. I’m just throwing out a few wild examples, but there are many less extreme but undesirable consequences to personifying information processes, as we are starting to see with the rise of corporate personhood in the US. A corporation is an information process, as is a city, but we have to decide whether the employees and citizens ultimately serve the motives of the process or whether the processes are to extend from their motives. If process is primary, then we are mere spectators to the process of our own irrelevance. If sense and motive are primary, then the process is ours to do with as we wish. Nothing short of the future of the universe hangs in the balance. It is more convenient to work with measurable processes and theories than messy emotions and sensations, yet the universe has found a way to do that, and I think so should we.
If we think of the world that we see through our eyes as an experience in the moment rather than the whole truth of existence, it is no longer a given that configurations and complexity are creators of life. The cellular machinery only relates to extra-cellular machinery on far micro and far macro levels of description. The most dynamic range is the fertile middle. Humans have, as far as we know, the broadest range between the mechanistic ‘out there’ and animistic ‘in here’. This is what makes us human. Any theory which does not clearly understand why that is important is not a complete theory, and is therefore ultimately a theory of the destruction of humanity. I’m not a huge fan of humanity myself, so I say this not as some Cassandra-esque wolf crying, but as a consequence of what seems to be the case when I add up everything to get a big picture. Information cannot feel. These words are not generic patterns produced by inevitable process alone. They are my words, and I am instantiating them directly on my own irreducibly macrophenomenal level.

