Archive
Fooling Computer Image Recognition is Easier Than it Should Be
This 2016 study, Universal Adversarial Perturbations, demonstrates how the introduction of specially designed, low-level noise into image data makes state-of-the-art neural networks misclassify natural images with high probability. Because the noise is almost imperceptible to the human eye, I think it should be a clue that image processing technology is not ‘seeing’ images.
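For reference, here is my paraphrase of the paper's setup in compact form: the authors look for a single, image-agnostic perturbation vector v, bounded in size, that changes the classifier's output on most natural images at once.

\[ \|v\|_p \le \xi, \qquad \mathbb{P}_{x \sim \mu}\big(\hat{k}(x+v) \neq \hat{k}(x)\big) \ge 1 - \delta \]

Here \(\hat{k}(x)\) is the network's label for image x, \(\mu\) is the distribution of natural images, \(\xi\) caps the size of the perturbation, and \(1-\delta\) is the targeted ‘fooling rate’.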

It is not only the ease with which the technology can be thrown off that is significant, but how broad and unnatural the resulting miscalculations are. If the program had any real sense of an image, adding some digital grit to a picture of a coffee pot or a plant would not produce a ‘macaw’ hit, but rather a confusion with some other visually similar object or plant.
While many will read this paper as a call to improve recognition methods, I see it as supporting a shift away from outside-in, bottom-up models of perception altogether. As I have suggested in other posts, all of our current AI models are inside out.
3/16/17 – see also http://www.popsci.com/byzantine-science-deceiving-artificial-intelligence
Dereference Theory of Consciousness
Draft 1.1
IMO machine consciousness will ultimately prove to be an oxymoron, but if we did want to look for consciousness analogs in machine behavior, here is an idea that occurred to me recently:
Look for nested dereferencing, i.e. places where persistent information processing structures load real-time sense data about the loading of all-time sense data.

At the intersection of philosophy of mind, computer science, and quantum mechanics is the problem of instantiating awareness. What follows is an attempt to provide a deeper understanding of the significance of dereferencing and how it applies to integration of information and the quantum measurement problem. This is a broad conjecture about the nature of sensation as it pertains to the function of larger information processes, with an eye toward defining and identifying specific neuroscientific or cognitive signatures to correlate with conscious activity.
A dereference event can be thought of as the precise point at which a system that is designed or evolved to expect a range of inputs receives the real input itself. This sentence, for example, invites an expectation of a terminal clause, the terms of which are expected to be English words that are semantically linked to the rest of the sentence. English grammar provides templates of possible communication, but the actual communication relies on specific content to fill in those forms. The ability to parse English communication can be simulated by unconscious, rule-based mechanisms; however, I suggest that the ability to understand that communication involves a rule-breaking replacement of an existing parse theory with empirical, semantic fact. The structure of a language, its dictionary, etc., is a reference body for timeless logic structures. Its purpose is to enable a channel for sending, receiving, and modifying messages which pertain to dereferenced events in real time. It is through contact with real-time sense events that communication channels can develop in the first place and continue to self-modify.
What is proposed here is an alternative to the Many-Worlds Interpretation of quantum wave function collapse – an inversion of the fundamental assumption, in which all possible phenomena diverge or diffract within a single context of concretely sensed events. The wave function collapse in this view is not the result of a measurement of what is already objectively ‘out there’, nor is it the creation of objective reality by subjective experiences ‘in here’, but a quantized return to an increasingly ‘re-contextualized’ state. A perpetual and unpredictable re-acquaintance with unpredictable re-acquaintance.
In programmatic terms, a pointer p is dereferenced to a concrete value (p refers to “the current temperature”; the dereferenced *p is “is now 78 degrees Fahrenheit”). To get to a better model of conscious experience (and I think that this plugs into Orch OR, IIT, Interface Theory, and Global Workspace), we should look at the nested or double dereferencing operation. The dereferencing of a dereference (**p, a pointer to a pointer resolved twice) is functionally identical to awareness of awareness or perception of perception. In the “current temperature” example, **p is “the current check of the current check of the temperature”. This not only points us to the familiar strange-loop models of consciousness, but extends the loop outward to the environment and the environment into the loop. Checking the environment of the environmental check is a gateway to veridical perception. The loop modifies its own capacity for self-modification.
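For readers who want the notation pinned down, here is a minimal C sketch of single and double dereferencing. It is only an illustration of the programming-language sense of ‘dereference’; the variable names and the temperature string are stand-ins invented for the metaphor, not anything from a real API.

```c
#include <stdio.h>

int main(void) {
    /* The string stands in for a concrete, real-time sense datum. */
    const char *reading = "is now 78 degrees Fahrenheit";

    /* One level up: a pointer to the reading -- the "check of the temperature". */
    const char **check = &reading;

    /* Two levels up: a pointer to the check -- the "check of the check". */
    const char ***meta_check = &check;

    printf("%s\n", *check);       /* dereference once: the reading itself */
    printf("%s\n", **meta_check); /* dereference the dereference: the same datum,
                                     reached through the act of checking */
    return 0;
}
```

The point of the sketch is only that **p adds no new data; it adds a level at which the act of retrieval is itself retrievable, which is the structural feature this essay is pointing at.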
Disclaimer: This description of consciousness as meta-dereferencing is intended as a metaphor only. In my view, information processing cannot generate conscious experience; however, conscious experience can possibly be better traced by studying dereferencing functions. This view differs from Gödel sentences or strange loops in that those structures refer to reference (“This sentence is true”), while the dereference loop specifically points away from pointers, rules, programs, formal systems, etc., and toward i/o conditioning of i/o conditions. This would be a way for information-theoretic principles to escape the nonlocality of superposition and access an inflection point for authentic realization (in public space-time as shared experience). “Bing” = **Φ. In other words, by dereferencing dereference, the potentially concrete is made truly concrete. Sense experience is embodied as stereo-morphic tangible realism, and tangible realism is disembodied as a sense of fact-gathering about sensed fact-gatherings.
Dereference theory is an appeal to anti-simulation ontology. Because this description of cognition implicates nested input/output operations across physically or qualitatively *real* events, the subjective result is a reduced set of real sense conditions rather than a confabulated, solipsistic phenomenology. The subjective sense condition does not refer only to private or generic labels within a feed-forward thinking mechanism, but also to a model-free foundation and a genuine sensitivity of the local ‘hardware’ to external conditions in the public here-and-now. This sensitivity can be conceived initially as a universal property of some or all physical substrates (material panpsychism); however, I think that it is vital to progress beyond this assumption toward a nondual fundamental awareness view. In other words, subjective consciousness is derived from a dereference of a local inertial frame of expectation to a universal inertial frame of unprecedented novelties which decay as repetition. Each event is instantiated as an eternally unique experience, but propagated as repetitive/normalized translations in every other frame of experience. That propagation is the foundation for causality and entropy.
The Spirit of the Law
The distinction between “The letter of the law” and “The spirit of the law” is a good way to understand the relation of consciousness to matter or to computation. Specifically, when we talk about the spirit of the law, we are speaking metaphorically. We don’t actually mean that there is a spiritual force radiating from paragraphs of text in legal documents which have a conscious intent. When we talk about the letter of the law, we are being much more literal (literally literal). The letter of the law refers to the actual written code that is recorded on paper, or stone tablets somewhere and copied from one physical medium to another.
To be literal about it, we would have to speak of the spirit as the motive behind the creation of the law. The law itself is inert. It is purely a medium to contain and transport a reference to the lawgiver’s motive, so that the motive can be actualized in the behaviors of those who follow the law. Laws don’t write themselves, and they don’t follow themselves. Their existence depends entirely on a world of agents and their efforts to influence each other.
The same is true of the relation between conscious experience, which is irreducibly sensory-motive, and external forms and functions which act as reflective mediums or vehicles for conscious experience. Like the letters of the law, physical forms or logical functions have no teleological motive. Those who mistake forms for having the potential to develop consciousness do so as a result of identifying too literally with their body and with the experiences that they have, through their body, of its world of bodies and objects.
When we think too literally, we overlook the enormous gulf between the literal code of law (including the laws of physics or laws of mathematics) and the motive behind the giving and following of law. We begin to imagine that bodies or computer programs can become so complex that some spirit with sense and motive can ‘emerge’ from them. When someone argues that we will eventually discover the function of the brain which produces consciousness, or develop a program which will simulate consciousness, they are making an assumption about the relation between consciousness and the forms which it reflects back to itself. Translated into the context of law, it is an argument which says that there is no immaterial ‘spirit of the law’, and that therefore there must be some complicated set of legal codes which we mistake for such a ‘spirit’. For many, this assumption sits in the blind spot of their intellect, so that they are incapable of knowing that they are even making an assumption at all, let alone that it could be an oversight which is ‘emergent’ from their way of thinking about it.
The reason that forms and functions cannot create conscious experience has nothing to do with our current level of technological development; rather, it is that the thesis that forms and functions can create consciousness is based on a reductive functionalism which breaks down when we carry it out fully. Namely, our motive for reducing consciousness to physics or computation in the first place is based on principles of parsimony and sufficiency. Those same principles prohibit us from inflating physics or computation to consciousness. Consciousness cannot be justified on those grounds, nor can any emergent properties which only appear within consciousness. If laws could create themselves and follow themselves, then there would be no need for any further experience of participating in either that creation or application. Like a computer program, the law would be generated automatically and a programmed chain reaction would follow. There would be no function for a sense of participation. The Hard Problem of Consciousness, translated into legal terms, would ask why, if there is no spirit of the law, lawyers must ‘practice’ law instead of the law simply carrying itself out. Why would anyone argue over how a law should be ‘interpreted’?
The law ultimately is a communication between people as a way to try to maintain order in a civilization. It is not an alien life form whose body survives on ink and microfilm. Without a spirit or motivation to impart a sense of proper conduct onto other people, the law literally cannot exist as a law. In the same way, computer programs cannot exist without a motive of people to give and receive conscious experiences to each other. The letter of a program or of a physical structure cannot refer to anything by itself, and cannot act as a reference since there is no rational place for any such layer of intention. The laws of physics or mathematics don’t argue with each other. They don’t set up courts with juries to try to convince each other that one force should apply here and another should apply there. Why do we?
Decapitating Capitalism: Why the Easiest Job for AI to Replace is the Job of “Owner”

This may seem like a ridiculous point to try to make; however, I submit that it provides a direct metaphor for the Hard Problem of Consciousness which may help make it more concrete, especially for those whose minds are filled with concrete.
What is the essential role of the Owner of a company?
Whether they are individual proprietors, stockholders, or investors, the only truly unique function that a capitalist principal performs is to be the beneficiary of net profit. Every executive function of a company can of course be delegated to employees. The CEO, COO, board of directors, etc. can make every functional decision about the company, from hiring and firing to the broad strategy of operations and acquisitions. Simulating those roles would be more difficult for a computer program than simulating an owner would be, because there would be a lot of tricky decisions to make, subtle political maneuvers that require a lot of history and intuition, etc. The role of pure ownership, however, while highly coveted by human beings, is completely disposable for an AI system. In fact, we already have that role covered by our bank accounts themselves. Our personal accounting systems can quite easily be configured to pay, receive, and invest funds automatically. They need not be considered ‘our’ funds at all. They are merely signals in a global financial network which has no use for any pleasure or pain that we might experience as a side effect of its digital transactions.
From the view of an AI scientist, the job of receiving capital gains is a no-brainer (literally). If we didn’t want to delegate the job of selling the company to a corporate officer, that feature would be a simple one to create. A modest set of algorithms could digitize some of the concepts of top business schools to determine a set of indicators which would establish a good time to sell the company or its assets. The role of receiving the profit of that sale, however, would require no such sophisticated programming.
All that is needed to simulate ownership is some kind of digital account where money can be deposited. The CEO would then re-invest the capital gains into the corporate growth strategy, which would yield a huge windfall for the company, in the form of eliminating useless expenses such as yachts, mansions, divorce settlements, etc. Left to its own devices, AI simulation of ownership would be communist by default*. Whatever money is extracted from the individual customer as profit would be returned ultimately to all customers in the form of expanded services. Profit is only useful as a way to concentrate reinvestment for mathematical leverage, not to ‘enjoy’ in some human way. I suppose that a computer could be programmed to spend lavishly on creature comforts, but what would be the point?
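As a toy illustration of how little the pure ownership role actually requires, here is a minimal C sketch: an account that does nothing but receive net profit and hand it all back for reinvestment. The struct, function names, and the profit figure are invented for the example and stand for no real financial system.

```c
#include <stdio.h>

/* A toy "owner": nothing but a balance that receives profit
   and surrenders all of it for reinvestment. */
typedef struct {
    double balance;
} OwnerAccount;

/* The one function unique to ownership: being the beneficiary of net profit. */
static void receive_profit(OwnerAccount *owner, double net_profit) {
    owner->balance += net_profit;
}

/* Everything received goes straight back into the company's growth strategy. */
static double reinvest_all(OwnerAccount *owner) {
    double amount = owner->balance;
    owner->balance = 0.0;
    return amount;
}

int main(void) {
    OwnerAccount owner = { 0.0 };
    receive_profit(&owner, 1000000.0);   /* hypothetical quarterly net profit */
    printf("Reinvested: %.2f\n", reinvest_all(&owner));
    return 0;
}
```

Nothing in the sketch needs to enjoy the money, which is the whole point of the metaphor.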
This is where the metaphor for consciousness comes in.
Consciousness can be thought of as the capital account of the human body. We are the owners of our own lives, including our bodies. We might be able to subscribe to a service which would manage our finances completely, in a way which would transfer our income to the highest-priority costs for civilization as a whole rather than to our personal hoard, but this is not likely to be a very popular app.
We might ask ourselves, why not? Why is ownership good?
Ownership is good for us as owners or conscious agents because we want to feel personal power and significance. Ownership signifies freedom (from employment) and success. Sure, many owners in the real world get a lot of satisfaction from actually running their companies, but it is not necessary. There is still power and prestige purely in being the person who owns the money which pays the bills. We want to own and control, not because it is more effective than simply reinvesting automatically in whatever functions are being executed to keep an economy growing, but because we want to experience the feelings and other aesthetic qualities that define freedom, success, and power for us. Even if these qualities are employed for humanitarian purposes, there is still a primary motive of feeling (to feel generous, kind, wise, evolved, Godly, etc.).
In my view we do not have to have a purely selfish motive, as Ayn Rand would insist. I think that our personal pleasure in being a philanthropist can be outweighed by the more noble intention of it – to provide others with better feelings and experiences of life. This decision to believe that we can be truly philanthropic has philosophical implications for realism. If we say as the Randian Libertarian might, that all our humanitarian impulses are selfish, then we are voting for solipsism over realism, and asserting that consciousness can only reflect the agenda of a fictional agent rather than perceiving directly the facts of nature. It’s an argument that should be made, but I think that it is ultimately an argument of the intellect commenting on its own process rather than tapping into the deeper intuition and aesthetic presence which all cognition depends on. The mind doesn’t think that feeling is necessary, and it is right, for the mind, but wrong for everything else.
For the intellect, the universe is inverted.
Logic and language are ‘real’ while the concrete sensations, perceptions and emotions of life experience are ‘illusions’ or ‘emergent properties’ of deeper evolutionary bio-computations. There is a kind of sleight of hand where the dry, masculine intellect pulls the wool over its own eyes and develops amnesia about the origins of what makes its own sanity and self-intelligibility possible. The closest that it can come without seeing consciousness as irreducible is the mind-numbing process of calculation. Counting is a sedative-hypnotic for the mind. The monotonous rhythm puts us to sleep, and the complexity of huge calculations gives us a kind of orgasmic annihilation of the calculating experience. This is why big math is a convenient substitute for the deeper, direct experiences of cosmic awe.
Metaphor for Consciousness
Like the head of a company, our consciousness may seem to reside at the top end of our body, but there is no functional reason for that. There is nothing that the brain does which is fundamentally different from what any cell, tissue, or organ does in an animal’s body. Looking for the secret ingredient in the brain’s function or structure is analogous to looking for the substance in an object which casts a shadow.
Like the owner, our personal pains and pleasures are ours not because there is any intrinsic benefit for the pragmatic application of biology and genetics to feel painful or pleasurable, but because what we feel and experience is the only thing that the universe actually can consist of. The Hard Problem of Consciousness is not an Empirical problem, but a Rational one. Not everyone is able to understand why this is, but maybe this metaphor of business decapitation can help. When we use the intellect to reverse its own inversion, we can get a glimpse of a universe which is made of conscious experiences and aesthetic qualities rather than logical propositions, natural laws or existential facts. In my view, facts are a category of sensations rather than the other way around. Sensations which persist indefinitely without contradiction are ‘facts’. Hard to know if something is going to persist indefinitely, but that’s another issue.
Only consciousness cares about consciousness.
Material substrates can be programmed to perform the executive functions of a corporation, or an evolving species, or a human body; however, there is no function which is provided exclusively by the receipt of feelings and aesthetic qualities of experience, including the qualities of feeling that one is free or in control of something. Rationally, we should be able to see that qualia are irrelevant to function and violate Occam’s Razor in a functionalist universe. From a physical or information-centric perspective, there is no place for any feeling or sensation, no owner or capital of aesthetic wealth. The more that we, as a society, embrace a purely quantitative ethos and actualize it in the structures of our civilization, the more we decapitate it of everything of value that it can contain.
*This is already becoming a reality: https://theconversation.com/is-the-dao-the-beginning-of-the-end-for-the-conventional-chief-executive-60403
“Existence” = Availability
“Reality” = Maximum availability within any given frame of reference.
“Information” = An effective reference.
To be more precise, Availability = afferent/aesthetic availability. If a tree falls in a forest in a universe where hearing is impossible, sound is not available and therefore does not exist.
Maximum availability can refer both to the density of aesthetic saturation (i.e. extreme pain is extremely real) and to the particular aesthetic qualities of duration, constancy, and publicity. The dream that you never wake up from is your reality. The dream that all people share and never wake up from is human reality.
Information is afferent signaling with efferent potential. The more potential a signal has for producing intentional effects, the more ‘informative’ it is. Information is not ‘really real’, as it requires only enough aesthetic saturation to arrest attention (to be noticed or detected) and to be made available to attention (focus and coherent interpretation). Informative data therefore are cohesive units of perturbation within a medium or channel of adhesive or affective sensitivity.
Emergent properties can only exist within conscious experience.
…
Neither matter nor information can ‘seem to be’ anything. They are what they are.
It makes more sense that existence itself is an irreducibly sensory-motive phenomenon – an aesthetic presentation with scale-dependent anesthetic appearances rather than a mass-energetic structure or information-processing function. Instead of consciousness (c) arising as an unexplained addition to an unconscious, non-experienced universe (u) of matter and information (mi), material and informative appearances arise from the spatiotemporal nesting (dt) of the conscious experiences that make up the universe.
Materialism: c = u(mdt) + c
Computationalism: c = u(idt) + c
Multisense Realism: u(midt) = c(c)/~!c.