
Morphoria and The Deconstruction of Arithmetic

isomorphism:

1. Biology Similarity in form, as in organisms of different ancestry.
2. Mathematics A one-to-one correspondence between the elements of two sets such that the result of an operation on elements of one set corresponds to the result of the analogous operation on their images in the other set.
3. A close similarity in the crystalline structure of two or more substances of similar chemical composition.

Homology, Analogy

“Homology, then, is the relation between abstract objects (descriptions, or representations of real world objects) where the formal description allows a mapping function between them.”

“Analogous relations are still a kind of isomorphism, but the mapping is not between sets of objects, but between the form of the objects themselves”

“If I discover that Raptor X has enzyme E, then I can infer that all other members of the Raptor group have E as well! That’s an enormous amount of inductive warrant. Interestingly, if I tell you that a raptor is a predator, you cannot infer that all raptors are (some are scavengers). Homology does not license analogical claims. But it may bracket them, as I will later argue. We can summarize the difference here by saying that classifications by homology are inductively projectible, while classifications by analogy are deductive only. Moreover, analogies are generally model-based. The choice of what properties to represent usually depend upon some set of “pertinent” properties, and this is not derived from an ignorance of what matters, or some unobtainable theory-neutrality. In order to measure similarity, you need to know what counts.”

Anamorphic drawings are distorted pictures requiring the viewer to use a special, often reflective device to reconstitute the image.

-phore: Bearer or carrier of.

Semaphore: Borrowed in 1816 from French sémaphore, coined in French from Ancient Greek σῆμα (sêma, “sign”) and -φόρος (-phóros), from φέρω (phérō, “to bear, carry”).

Euphoria: An excited state of joy, a good feeling, a state of intense happiness.


metaphor:

  1. A figure of speech in which a word or phrase is applied to an object or action to which it is not literally applicable.
  2. A thing regarded as representative or symbolic of something else, esp. something abstract.

anaphor: 1. a word (such as a pronoun) used to avoid repetition; the referent of an anaphor is determined by its antecedent. 2. An expression referring to another expression. In stricter uses, an expression referring to something earlier in the discourse or, even more strictly, only reflexive and reciprocal pronouns.

The difference between morphic (shape) and phoric (carrier) is a good way of grabbing on to the essential difference between public and private phenomena. Metaphors or other ‘figures of speech’ can have a wide gap between one reference and another. Similarity, as the blog quoted above mentions, is notoriously hard to pin down. The author mentions “Hamming Distance, Edge Number and Tversky Similarity” but these approaches to defining similarity rely on lower level methods of pattern recognition. In all cases, similarity requires some capacity to detect, compare, discern criteria from the comparison, and cause an action which can be detected publicly. In other words, similarity requires that something can generate sensory-motive participation.
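For concreteness, the similarity measures named above can be sketched in a few lines of Python. This is only an illustration: the function names and the α = β = ½ weighting (which reduces Tversky similarity to the Dice coefficient) are my own choices, not anything from the quoted blog.

```python
def hamming(a, b):
    """Count the positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky similarity between two feature sets.

    alpha and beta weight the features distinctive to a and b respectively;
    alpha = beta = 0.5 gives the Dice coefficient.
    """
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))
```

Note that both functions already presuppose exactly what the paragraph points out: someone has decided in advance which positions or features "count" as comparable.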

The infocentric definitions of similarity do not adequately address the private sense of similarity at all; rather, they confine the consequences of theoretical object interactions to demonstrable sets of criteria. Edge number does not apply to something such as whether a feeling is like another feeling or not. To me all of this really highlights the bias of the Western approach of reducing science to the taxonomy of rigid public bodies. The experience even of fluid, gaseous, or plasma phenomena is virtually inaccessible unless reduced to a microcosmic level at which only rigid object properties are considered and all smoothness is abstracted into ‘emergent properties’.

This relates back to the analog and mathematics. If we look at a basic arithmetic operation like 4 x 5 = 20, and get beneath the surface, I think we might find something like “If four things were just like one thing, then five of those four-ish things would be just like ten two-ish things, or two ten-ish things, which would be one twenty-ish thing”. In a previous post, I got into ratio as the root of reason, and how the radius can be adjusted between tighter and looser rationality. How figurative you want to get is the question which determines how universal, public, and ‘Western-scientific’ you want to get. Quantitative analysis of course, is the ultimate rationalization, the ultimate reduction of ‘just like’ to ‘exactly the same as’. The ‘=’ condenses all subjectivity into a single semaphore, a flag which refers to precise equivalence, or absolute universal similarity. Similar in all ways that count. What counts is presumed in advance by the Western mind to be reliable public function, aka objective realism.

Numbers therefore, could be called isophors. Carriers of one to one correspondence. The caliper of the drafting compass of reason is set to minimum tolerance – to the point of points; digital-Boolean logic. At this point, even geometry is superfluous, and computation is revealed to be a spaceless function of connection and disconnection. Whether electronic or mechanical, computers are based on contact and detection first and foremost. Contact (a connection of ranged position) influences physical change (thermodynamic disposition). Despite the assumptions of both AI and QM, I am confident that this is a property of matter, not of vacuum space or ‘information’.
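The claim that computation bottoms out in contact and detection can be sketched as follows. The `contact` primitive here is a hypothetical name of my own; the rest is the standard observation that a single NAND gate is universal, so every Boolean function reduces to combinations of connection and disconnection.

```python
def contact(closed):
    """A single point of contact: either connected (True) or not (False)."""
    return bool(closed)

def nand(a, b):
    """Two contacts in series, inverted: the universal gate from which
    every other Boolean function can be composed."""
    return not (contact(a) and contact(b))

# All further logic is just arrangements of the one universal gate.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))
```

Whether built from relays, transistors, or marbles, the same connect/disconnect structure computes the same functions, which is the substrate-indifference the paragraph is gesturing at.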

Moving from numbers to geometry and algebra is a matter of making isophors into meta-isophors. We understand that just as a twenty is four fives or five fours, or two tens, so does any number enumerating itself twice is a square, and a squared hypotenuse relates to the square of the sides of a triangle as the squared radius of a circle times pi is its area. The transfer of integers into variables is a squaring of the isomorphic function, making all of mathematics a psychological model of modeling itself. It is no surprise then, that at this point of maximum Western influence over science, we should mistake our own concretely physical experiences of conscious life for a ‘model’ and the world in which we live for a Matrix.
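Written out in conventional notation, the standard identities this paragraph runs through are:

```latex
n \cdot n = n^{2}, \qquad
a^{2} + b^{2} = c^{2} \quad \text{(right triangle)}, \qquad
A = \pi r^{2} \quad \text{(circle)}
```

Each replaces particular counted things with a variable, which is the move from isophor to meta-isophor described above.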

Of course, when all of the logical analysis and quantitative analogs are stripped away, it is only the morphic, the phoric, and the sense to tell the difference which are genuinely real. Logic is not the isophor of sense however, or even a homophor. I would say that maybe logic is the public facing hemisphor of sense. It is the half of sense which can be generalized externally.

  1. phiguy110
    January 12, 2013 at 8:58 pm

    Um…who are you? This is the most incredible metaphysics I’ve ever read. I mean, this is IT. The entire edifice of “the world,” subjective and objective, reducing to a single abstract yet understandable THING. It’s the dream of all philosophy. Bravo.

    • January 13, 2013 at 2:28 am

      Wow, thank you, I really appreciate that! Most people do not seem to feel the same way, or at least those who are vocal about it. Are you sure that you aren’t a voice in my head? 🙂

  2. phiguy110
    January 13, 2013 at 1:17 pm

Let me ask you a question: where does information theory fit into your system as a paradigm of explanation? Should it be abandoned as too reductive, or can we use its “fundamental” quality of description to better illuminate those aspects of reality that ultimately can’t be reduced?

    You bring up Tononi’s IIT in another post but don’t address the theory’s major philosophic point (or one of them at least): that a conscious experience is always defined by what it rules out; “underneath” every experience is a set of experiences that were possible but not actual. Is there room in multisense realism for this kind of “fundamental” informational understanding of experience? Is experience somehow defined by potentialities?

    • January 13, 2013 at 3:29 pm

      Yes, I think that understanding and developing information theory further should continue to be an important priority. I would suggest only that if we are going to apply information theoretical models to consciousness, we should keep in mind that it is only a model and that what we are really ultimately talking about is sensory-motor experience. Artificial Intelligence, for example, would benefit by being released from the responsibility of trying to bring systems or programs to life, so that it can be directed toward its legitimate goal of emulating intelligence as a service to human purposes (human sense and motive, not arithmetic logic).

      As far as experience being defined by what it rules out, yes that is an important issue. I think that Tononi’s theory breaks new ground in the right direction, which I think will eventually lead to what I have called Significance, or Perceptual Inertial Frames, or (for added pretentious precision) apocatastatic gestalt trans-rational algebra. Fancy words for trying to describe the counter-intuitive reversal of sense-making: How we see an image because our visual experience screens out the granularity of the pixels. We ‘overlook’ them and see the coherent image ‘through’ the pixels rather than as an abstract ‘pattern’ of pixels which we could (if we were computers) signify perfectly well without any embodied presentation layer.

      So yes, it’s a big deal that sense works from the inside out compared to logic. Wholes through holes instead of wholes made of parts (which aren’t wholes at all, but assemblies or machines). This gets into my whole Sole Entropy Well interpretation of the Big Bang, since in order to have access to these experiential wholes, sense itself has to, on some level or one level, be a unified whole or monad. This means we have a universe from Everything, subdivided by or diffracted through differently scaled signal attenuations. Spacetime is the gap which acts as a resistor to the singularity underlying all experiences of time and place. It sounds mystical and Eastern, and maybe it is, but I see it as no more wild than vacuum flux and MWI. Besides, even though sense is something which seems exotic to think about scientifically as a universal element, it has the advantage of being verifiable to us locally. We have to use sense to make sense of the idea, so we can’t very well deny that it exists (or really ‘insists’).

      Potentialities then, would be what any given experience contains as far as its relation to the whole, and to greater, parallel and lesser wholes. It’s all present implicitly (as Bohm might have said) or morphically (as Sheldrake might say), like the Net of Indra. My contribution is to say that it is not present as an energy or resonance or information, rather it is what I call solitropic. Solid, static as in under-STAND-ing. The absolute primordial orientation. We would think of that as an unfathomable mess to contain all of history from all perspectives, but there is no containment necessary because there is nowhere else for it to be. The erasure of memory and the disappearance of the past is a function of what we are as biological organisms, not what the universe is.

  3. phiguy110
    January 13, 2013 at 11:39 pm

    “apocatastatic gestalt trans-rational algebra.” YES! How I love to pretend I understand that. Sir, I don’t know if this is metaphysics or 21st century poetry but…I hope you have as much fun writing it as I do reading it!

    All great comments above, I just think that humanity is going to have to go through 2 or 3 conceptual bottlenecks before we can even evaluate a lot of the claims of multisense realism. It’s very forward looking, to say the least. And the first bottleneck is information theory as a unifying theory of material reality, with bridging laws to consciousness. (The IIT strikes me as an incredible first step in that direction.)

    Can I ask you to explain (or if you’ve already explained ad nauseam, point me to the appropriate post where you do explain) why it is that AI can never be conscious, even if it can fool us with its behavior? The IIT predicts otherwise, claiming that any system with the appropriate causal structure generates consciousness, no matter what the system is made of. Indeed the heart of the IIT is a conceptual holy trinity, an identity thesis between information, causation and consciousness…none of which can exist independent of each other.

    From what I gather, the multisense realism response is to basically put forward an elaborate new defense of Searle’s Chinese Room argument. Multisense Realism would want to say that the Chinese Room seems to know Chinese because the room is a kind of “reflection” of real Chinese speakers, and it itself is dark. Am I close?

    But I wonder, why doesn’t the “system’s reply” satisfy, as it does for most non-reductive functionalists?

    Or perhaps your reasons have little to do with Searle. Whatever the case, I actually think this is going to be a VERY important question in our lifetimes; we have to get this one right.

    • January 14, 2013 at 1:35 am

      I agree, there are some major obstacles to overcome before what I’m looking at could be investigated. Mainly I am trying to sketch out in broad strokes the general direction of a unified physics. I use a lot of neologisms, but I don’t expect them to stand or come into popular use so much as to provide interested pioneers with something to grab onto. I’m not opposed to being wrong about any of this, but I would need someone to come up with a good reason why the general principles of MR aren’t at least better than what we have so far at this point. In that sense, I’m not really making any claims as much as posing a question – ‘is there some reason that this model doesn’t work?’

      I agree the IIT is an incredible first step. When I was at the TSC conference last April I was circulating an idea to nominate a quantitative unit of consciousness called the Chalmeroff, which would locate conscious experiences on a continuum of private-facing and public-facing sense and logarithmic scale of significance.

      The only reason why AI can’t be conscious is that consciousness is really a human scale of awareness which arises exclusively from zoological, biological, and chemical experience. To try to impose an awareness which has been forged by sticky, greasy, gooey interactions onto a dry crystalline material assumes that awareness is non-physical. It assumes that awareness is generated by public structures in space rather than private experiences through time. To be clear, I’m not suggesting that if we made microprocessors out of sugars and lipids that it would change anything for AI, only that the fact that there is a difference between silicon wafers and hamburger that is non-trivial to the survival of living beings should be suspected as a symptom of deeper issues in the escalation of physical quality sense to chemical quality feeling and biological quality awareness.

      Another issue that my understanding points to is that consciousness is limited from the interior. You can’t teach a stone to feel like it is alive. When we impose a human logic on something which doesn’t even know how to arrange itself into living cells, we are exploiting a superficial reliability on the physical level. I did not know about Searle or the Chinese Room until after I had been doing MR for a while, but he did see, as did Leibniz and others, that there is a problem with reducing private awareness to the behavior of public structures. The map-territory relation hints at how easy it is for us to corrupt and mistake the boundary between things which have experiences and things which we use to reflect our own experience. The problem with IIT is that it assumes that information exists independently of awareness, when in fact, information can only ever be an experience of being informed. No byte of data has ever done anything on its own. Information is a shadow, it’s sterile and theoretical.

      The Chinese Room seems to know Chinese only because of the misplaced expectations of the audience. It is no different than attributing a personality to a ventriloquist’s dummy. Language is a multi-layered text, it has a verbal-audio layer, a phonetic layer, a grammatical layer, a graphic-visual layer, an alphabetic-syntactic layer, an intentional-logical layer, a non-verbal gestural layer, a poetic layer, a chronological-cultural layer, etc. What you get with a Searle book of Chinese conversational lookups is a recording of the graphic-visual layer which is common to any human being who has working eyes. The other layers are inferred by the audience, just as you infer my meaning by reading these words now, even though the routers and switches of the internet which provide you with these pixels are completely in the dark. The computer knows nothing. The semiconductors which are assembled together by us so that we can use it as a computer have, I think, physical level experiences (of perhaps holding and releasing of molecular oomph and crystalline precision) but no amount of complex configuration is going to coax a human or animal quality sensation out of the thing. It’s a sculpture which plays recordings of human sense, not a community of living cells which have divided themselves from a single organic molecular event.

      The system that does not create itself is only a system in name. There is no way, for example, that we could build a system of traffic signals on our roads that became self-aware and able to create new road by itself. The whole approach of AI so far has been to confuse the menu and the meal entirely. If this kind of functionalism were true, then I think that we should be living in a world where exotic spirits of information crowd every empty nook and cranny of matter or energy. It ignores the incredibly specific constraints on biology and human neurology and adopts a cavalier confidence in abstraction which has not played out so far. The increasing complexity of computers and robots has not made them more sympathetic or sensitive. Their ‘intelligence’ is still extremely thin and brittle, even if they can be programmed to spit out trivia answers or winning chess strategies. They still have no idea what is going on. Knowing what is going on, and caring about it, is something which cannot be programmed; it has to be discovered personally.

    • January 14, 2013 at 1:43 am

      I forgot to say also that I think that what allows us to have human quality consciousness is mutually exclusive to what allows a machine to be programmed. We go to a lot of trouble to use extremely pure and ‘polite’ materials which will execute our purposes with no will of their own. We would have a hard time making a computer out of anything that has a will of its own. In our case, our entire body is made of the same cell, so we can imagine that on some level every cell has a will of its own and that this is the same will throughout. Such is not the case with an assembled machine.

      We could make AI out of biology, but we couldn’t control it. If we could, there would be an ethical question. Wouldn’t truly conscious AI be slavery?

  4. PhiGuy110
    January 16, 2013 at 11:38 pm

    Lots of food for thought here. I actually disagree with a lot of your conclusions but how you get there is itself a very elucidating thought process.

    A lot of where I disagree with your thoughts on AI is in your notion of “programmed.” I whole-heartedly concur that any attempt to “program” a computer to be conscious will fail. But this is not how scientists actually intend to generate AI. What they are looking at is a much more stochastic, “hands-off” approach in which the system is allowed to “evolve” itself and form its own connections using self-programmable neurologically inspired chips. I don’t see why this kind of process couldn’t allow for silicon systems to “discover” their own consciousness much as carbon systems did. Now, perhaps biological material alone has the structural flexibility to form systems that can generate consciousness but we’ve no reason to believe that as of yet.

    I do think, however, that no system like IBM’s Watson can simply “come to life” by pumping it with more data or processing power. (Well, that’s not totally true, but I’ll get to that in a second.) And this is why Searle’s Chinese Room is so powerful…it makes us ask what language IS. In the end, I do think the system’s reply is correct for the Chinese Room but in a very bizarre way. If the Chinese Room could answer the Chinese questions it would have to “think” that it was a flesh and blood human with eyes, a body and a history, even though it isn’t. Language is an ABSTRACTION LAYER on something which is not itself linguistic, namely, experience itself. The Chinese Room would have to have available to itself a network that believed, when activated, that it was a person, for only a person, a locus of human experience, can “understand” human language. Kurzweil and others have been talking about how Watson was “understanding” the language presented to it and I think this is actually a little dangerous. Watson shows how good the parlor trick can get, but I have no doubt that it had nothing like human consciousness associated with it; it had ZERO human understanding. The caveat, and it’s a weird one, is that I think maybe Watson COULD seemingly “come to life” but the character of its consciousness may be totally alien to us, perhaps it already is. If really developed though, its behavior would be totally unpredictable. Watson’s understanding, its sense of the world, would be unrelated to what we “project” into it. Its behavior may be no guide to its internal experience. I hope that’s not true; given that the development of consciousness appears to be like a ladder, not a random fireworks show, I doubt it is. Nonetheless, it’s a disturbing possibility.

    I suppose I just don’t find the “hamburger is different than silicon” argument convincing because it itself doesn’t deepen my understanding. I want to know what it is about hamburger, in principle, that allows it to have the necessary and sufficient conditions for consciousness, while silicon does not. I want to make SENSE of the brain that I have, this strange Chinese Room. Ultimately, I see the brain as a representation in consciousness, a map if you will, of the kind of causal system which generates consciousness, or at least structures it. In so far as the world is a sense making system, I don’t see why THIS sense can’t be subsumed under the light of consciousness like all other knowledge. It’s merely a very deep refinement of the relation of the “inner” to the “outer,” so that we can understand, under a conceptual umbrella, how consciousness is related, abstractly, to “brain” processes. It’s this abstraction away from the hamburger and to the network embedded within that I find most promising. It lets us see that consciousness is literally “made of” logic gates – made of sense. And logic gates are universal; they know no privileged substrate, though understanding how the organization of the network came to be cannot be understood outside the entire process of evolution, from the big bang on.

    We’ve been looking in the wrong direction with machine learning for a long time I think. Instead of impressing with chess and Jeopardy, we should be trying to get a robot to crawl and navigate space, discerning light from dark in a competitive environment, and otherwise create embodied tasks that replicate evolution. Our embodiment is key to our consciousness and there will be no mind in the machine until it recapitulates the process that got us from plankton, to fish to ape to man. This development process is probably some kind of universal law. At least I hope it is. Otherwise we may be in the odd scenario of interacting with machines that seem “intelligent” but have an internal life totally outre to anything we can imagine. This would be a very existentially troubling picture as our relation to reality is based on us forming some kind of theory of mind about the objects of our experience. A “smart” computer like HAL from 2001 better be conscious, or else an eerie feeling of solipsism may spread over humanity. Personally, I bet that certain behaviors in the universe, including the linguistic faculty, REQUIRE mature consciousness at our level or beyond. But again, ultimately, this belief is the only thing that allows me to relate to a world that I believe is populated by other minds like my own or lesser. It’s the behavior of other people and animals, not their wetware, that makes me believe they have a mind and helps me categorize what kind of mind I think that is.

    Finally, I don’t think you give the IIT enough credit. It DOES NOT assume that information exists independently of awareness. The IIT is clear: wherever there is information, there is consciousness. Wherever there is consciousness, there is information. Further, wherever there are those things, there is also causation. A system is only a system in name if it doesn’t have independent causal efficacy, that is to say, if it is not a subject of consciousness. One experience is always one system. Consciousness, according to the IIT, is how to cut the world at its joints.

    It’s weird that humanity is about to have to go through these kinds of debates, but I think we are just at the beginning of a very bizarre human conversation.

    • January 17, 2013 at 1:18 am

      It’s hard to get from where we are in AI to the place that I’ve come to in MR. You really have to let go of absolutely every assumption about the universe as a place filled with objects and reconstruct it as an experience with bits that seem chunky from some perspectives.

      Once we agree to model the universe as an experience, then we can fill in that model a bit with qualitative levels of that experience; physical, chemical, biological, zoological, anthropological provide good break points. The problem with qualitative break points is that they are not logical in any way. No amount of quantitative fluctuations of measured energetic intensity would or could logically turn red or green. No configurations of atoms could begin to feel something like pain or pleasure if atoms were what we assume they are by their physical layer description. I think that what this means is that we have to evaluate biology on biological terms rather than information terms. Mathematically, the finality of death doesn’t make much sense. The uniqueness of life in its appearance from organic molecules only doesn’t make much sense, at least not obviously. The reason for that, I think (and all of this is my own speculation of course, if I don’t bother to say ‘I think’ it’s just because I’m tired of repeating it) is that we are used to looking at the tapestry from the wrong side. It isn’t that there is something magical about organic chemistry which allows biology, it is simply that organic chemistry is the signature representing biological experiences.

      In this chart: https://multisenserealism.files.wordpress.com/2012/01/excelchartmrpic.jpg

      I’ve tried to roughly break out the model in its public and private aspects and in its levels. Note that logic and sense are on opposite sides. What I mean by this is that sense is always a personal presentation while logic is a specific kind of sense which is always directed toward an impersonal re-presentation. This is the opposite of what you are getting at when you say “It lets us see that consciousness is literally ‘made of’ logic gates – made of sense.” Logic gates have no sense at all. The materials which make up a logic gate have sense, but by definition only a low level physical sense which will never evolve or learn on its own. The logic gate as a whole is only a canvas upon which human sense is projected. There is no experience of logic there, only a set of reliable conditions which we have implemented mechanically. The gate will continue doing the same thing forever and never care whether it is solving the mysteries of the universe or repeating the same two signals over and over.

      My point is not that hamburger is different than silicon, it is that the reason hamburger is different from silicon is not simply that they have a different structure, but that they represent different histories which date back to the beginning of time and foreshadow countless possible futures. They play different roles in the story of the universe, because, as you have to first agree, the universe is an experience and not a place. Places are a category of sensory-motor experience.

      I agree that robotics is the way to go to properly develop life-like behaviors, but again, logic is an impersonal look at behaviors, sense is a personal experience which drives behavior from the inside. When we approach it backwards, we amputate the authentic fertility of awareness which is grounded in the totality of existence from the beginning of time, and substitute a list of one dimensional routines. Although these routines can be nested and adapted to become quite sophisticated and even fool many of us some of the time, the marvel is that our own consciousness projects sentience on their behavior, not that they have actually progressed from unconscious mechanism into a conscious experience.

      As far as IIT, I do like it as a great way to move forward, but when you say “The IIT is clear: wherever there is information, there is consciousness,” I point to that as a problem. Bugs Bunny is information, but Bugs Bunny has no consciousness. The empty monotony of sand dunes in the desert is information, but it has no consciousness – at least, not on the level that we are looking at it. Taken independently of human presence, a beach moving at 100 years per second of human time might signify some kind of experience behind it in a geological context. Individual grains of sand on a molecular level, reflecting light and being tumbled by wind, might have some kind of musical solitude that contributes to some experience. I think it’s a mistake though to presume that anything that we recognize as patterns, particularly patterns which we have designed explicitly to simulate our own behavior, actually refers to a consciousness which we imagine is present.

      I can’t emphasize enough that the systems which we associate with brains or bodies are indirect representations of experience, not producers of experience. The death of an animal can be caused by a breakdown in the body’s system, but strange luck or a particular will to live can cause body effects as well. Think of it not like a robot or Pinocchio that comes to life or ‘turns on’ but more like a movie which holds the attention of the audience. The relation between movie (top level awareness) and the audience (sub-personal awareness corresponding to neurons on the impersonal side) is bi-directional. The audience leaves when the movie is over, but also the movie will end (in this metaphor) when the audience leaves.

      The problem with silicon is that it’s an audience which can’t see or hear. It can buy tickets and it can count the number of frames, and because it is inhumanly patient, it can actually match the frame count with a list of supplied reactions (laugh, clap, boo) and perform those actions convincingly. If we put a camera on the audience, we might not be able to tell the difference.

    • January 17, 2013 at 1:31 am

      I think it’s important to focus on what we have actually seen so far in the history of life and the history of computation rather than on any confidence in theory. Life makes no sense as a theory. If inorganic systems could feel and think as organic persons, we must ask ourselves why we have not seen them do that in any way, under any circumstance. Rather than letting ourselves off the hook and assuming that some type of complexity threshold will make it all clear, I think we should honor the reality of the situation which we find ourselves in as human beings in a largely sterile and inhospitable universe, and not give our optimism the benefit of the doubt. By all means, those who are interested in pursuing the optimistic path to inorganic AI should not be dissuaded, because solving the ‘Easy Problem’ is practically and medically more important than solving the Hard Problem, but as far as an orienting theory of everything goes, I think that we cannot assume Pinocchio will ever come to life.

      • PhiGuy110
        January 17, 2013 at 6:50 am

        But do you believe that Pinocchio could truly, TRULY, fool us? Will behavior not be a guide to mind in the future? Or will computers always reveal themselves as mechanical in the end? Is conscious-like behavior unique to conscious entities? (FWIW, I hope so.)

      • January 17, 2013 at 1:13 pm

        I think that there will likely always be at least some people who are not fooled all of the time. Animation, puppetry, and acting can be very convincing, and fiction is not a simple matter of fooling children. Because we are magnifying and projecting our own human sense through the super-personal or super-signifying ranges of sense (which utilize archetypal content), our affinity for characters as beings is not entirely misplaced – only their literal location and attribution of autonomy is illusory. Bugs Bunny and Porky Pig are concretely real imaginary characters, just as our own friends are imaginary characters when we encounter them in dreams or thoughts, even though they are also autonomous persons while they are living a human life.

        For me, it seems that the more we use computers to simulate reality, the more hollow and irritating the result becomes. The genius which can be accessed or reflected through the canvas of electronic computers is more likely to be found in the direction of simple iconic avatars, like Mario or Pac-Man. I do appreciate some of the spectacular CGI used to fill in for backgrounds in movies, but whenever it is used as a subject, I find myself leaning into the ‘uncanny valley’. I feel almost nauseous looking at whirling, exploding, transparent-glowing 3-D volumes. Nothing could seem less real to me than that. There is no charm or personality, just an expectation of audience-impressing spectacle – the fulfillment of some studio’s investment dollars.

