It sounds like a silly proposition and it is a little tongue in cheek, but trying to come up with an answer could have ramifications for the sciences of consciousness and sentience.
Years ago cosmologist Paul Davies talked about a theory that organic matter and therefore life was intrinsically no different from inorganic matter – the only difference was the amount of complexity.
So when a system becomes sufficiently complex, the property we know (but still can’t define) as ‘life’ might emerge spontaneously, as it did from amino acids and proteins three billion years ago.
We have such a system today in the internet. As far back as 2005, Kevin Kelly talked about how the internet would soon have as many ‘nodes’ as a human brain. It’s even been explored in fiction, in Robert J Sawyer’s Wake series.
And since human consciousness, with all the deep abstract knowledge, creativity, love, and so on that it gives us, arises from a staggering number of deceptively simple parts, couldn’t the same thing happen to the internet (or another sufficiently large and complex system)?
I’m trying to crowdsource a series of articles on the topic, and I know this isn’t the place to advertise. But even though I’d love everyone who reads this to back me, I’m more interested in getting more food for thought from any responses, should I get the project off the ground.
I think that the responses here are going to tend toward supporting one of two worldviews. In the first worldview, the facts of physics and information science lead us inevitably to conclude that consciousness and life are purely a matter of particular configurations of forms and functions. Whether those forms and functions are strictly tied to specific materials or they are substrate independent and therefore purely logical entities is another tier of the debate, but all those who subscribe to the first worldview are in agreement: If a particular set of functions is instantiated, the result will be life and conscious experience.
The second worldview includes all of those who suspect that something more is required: that information or physics may be necessary for life, but not sufficient. That worldview can be divided further into those who think that the other factor is spiritual or supernatural, and those who think that it is an as-yet-undiscovered factor. Those in the first worldview camp might assert that the second worldview is unlikely or impossible because of:
1) Causal Closure eliminates non-physical causes of physical phenomena
2) Bell’s Theorem eliminates hidden variables (including vital essences)
3) Church-Turing Thesis supports the universality of computation
1) Causal Closure – The idea that all physical effects have physical causes can be seen either as an ironclad law of the universe or as a tautological fallacy that begs the question of materialism. On the one hand, adherents of the first worldview can say that if there were any non-physical cause of a physical effect, we would by definition see the effect of that cause as physical. There is simply no room in the laws of physics for magical, non-local forces, as the tiniest deviation in experimental data would show up for us as a paradigm-shifting event in the history of physics.
On the other hand, adherents of the second worldview can either point to a theological transcendence of physics, which is miraculous and beyond physical explanation, or they can question the suppositions of causal closure as biased from the start: since all physical measurements are made using physical instruments, any metaphysical contact might be minimized or eliminated.
It could be argued that physics is like wearing colored glasses, so that rather than proving that all phenomena can be reduced to ‘red images’, all that it proves is that working with the public-facing exteriors of nature yields a predictably public-facing exterior logic. Rather than diminishing the significance of private-facing phenomenal experience, it may be physics which is the diminished ‘tip of the iceberg’, with the remaining bulk of the iceberg being a transphysical, transpersonal firmament. Just as we observe the ability of our own senses to ‘fill-in’ gaps in perceptual continuity, it could be that physics has a similar plasticity. Relativity may extend beyond physics, such that physics itself is a curvature of deeper conscious/metaphysical attractors.
Another alternative to assuming causal closure is to see the different levels of description of physics as semi-permeable to causality. Our bodies are made of living cells, but on that layer of description ‘we’ don’t exist. A TV show doesn’t ‘exist’ on the level of illuminated pixels or digital data in a TV set. Each level of description is defined by a scope and scale of perception which is only meaningful on that scale. If we apply strong causal closure, there would be no room for any such thing as a level of description or conscious perspective. Physics has no observers, unless we smuggle them in as unacknowledged voyeurs from our own non-physically-accounted-for experience.
To my mind, it’s difficult to defend causal closure in light of recent changes in astrophysics, where the vast bulk of the universe’s mass-energy has been suddenly re-categorized as dark energy and dark matter. Not only could these newly minted phenomena be ‘dark’ because they are metaphysical, but they show that physics cannot be counted on to limit itself to any particular definition of what counts as physics.
2) Here’s a passage about Bell’s Theorem which says it better than I could:
“Bell’s Theorem, expressed in a simple equation called an ‘inequality’, could be put to a direct test. It is a reflection of the fact that no signal containing any information can travel faster than the speed of light. This means that if hidden-variables theory exists to make quantum mechanics a deterministic theory, the information contained in these ‘variables’ cannot be transmitted faster than light. This is what physicists call a ‘local’ theory. John Bell discovered that, in order for Bohm’s hidden-variable theory to work, it would have to be very badly ‘non-local’, meaning that it would have to allow for information to travel faster than the speed of light. This means that, if we accept hidden-variable theory to clean up quantum mechanics because we have decided that we no longer like the idea of assigning probabilities to events at the atomic scale, we would have to give up special relativity. This is an unsatisfactory bargain.”
From another article:
As Bell proved in 1964, this leaves two options for the nature of reality. The first is that reality is irreducibly random, meaning that there are no hidden variables that “determine the results of individual measurements”. The second option is that reality is ‘non-local’, meaning that “the setting of one measuring device can influence the reading of another instrument, however remote”.
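For concreteness (the passages above don’t state the inequality itself, so this is added purely as background, in its standard CHSH form): for any local hidden-variable theory, the correlations E measured at detector settings a, a′ and b, b′ must satisfy

|E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′)| ≤ 2

whereas quantum mechanics predicts, and experiments confirm, violations of this bound up to 2√2.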
If particles are, as Fritjof Capra said “tendencies to exist”, then the ground of being may be conceived of as a ‘pretend’-ency to exist. This makes sense to me, since we experience with our own imagination a constant stream of interior rehearsals for futures that might never be and histories that probably didn’t happen the way that we think. Rather than thinking of our own intellect as purely a vastly complex system on a biochemical scale, we may also think of it as a vastly simple non-system, like a monad, of awareness which is primordial and fundamentally inseparable from the universe as a whole.
3) The Church-Turing Thesis has to do with computability: roughly, the claim that anything which can be calculated by an effective, step-by-step procedure can be computed by a Turing machine, i.e. broken down into simple arithmetic operations. If we accept it as true, then it can be reasoned, within the first worldview, that since the brain is physical and physics can be modeled mathematically, there should be no reason why a brain cannot be simulated as a computer program.
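To make that line of reasoning concrete, here is a deliberately crude sketch (my own toy illustration, not anyone’s actual brain model): a single leaky integrate-and-fire neuron, stepped forward numerically the way the first worldview assumes any physical dynamics could be. All parameter values are made up.

```python
# Toy leaky integrate-and-fire neuron: a stand-in for the claim that physical
# dynamics can be modeled mathematically and stepped forward on a computer.
dt = 0.001        # time step in seconds
tau = 0.02        # membrane time constant in seconds
threshold = 1.0   # firing threshold (arbitrary units)
reset = 0.0       # membrane potential after a spike
current = 1.5     # constant input drive (arbitrary units)

v = 0.0
spike_times = []
for step in range(1000):                 # simulate one second
    v += (dt / tau) * (current - v)      # discretized membrane equation
    if v >= threshold:
        spike_times.append(step * dt)
        v = reset

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```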
There are some possible problems with this reasoning:
a) The brain and its behavior may not be physically complete. There are many theories about consciousness and the brain. Penrose and Hameroff’s quantum consciousness hypothesis (Orch OR) postulates that consciousness depends on quantum computations within cytoskeletal structures called microtubules. In that case, what the brain does may not be entirely physically accessible. According to Orch OR, the brain’s behavior is ultimately caused by quantum wavefunction collapse through large-scale Orchestrated Objective Reductions. Quantum events of this sort could not be reproduced or measured before they happen, so there is no reason to expect that a computer model of a brain would work.
b) Consciousness may not be computable. Like Bell’s work in quantum mechanics, mathematics took an enigmatic turn with Gödel’s incompleteness theorems. Long story short, Gödel showed that any consistent axiomatic system powerful enough to express arithmetic contains truths which cannot be proved without reaching outside of that system. Formal logic is incomplete. Like Bell’s inequality, incompleteness can take us into a world where either epistemology breaks down completely and we have no way of ever knowing whether what we know is true, or we are compelled to consider that logic itself is dependent upon a more transcendent, Platonic realm of arithmetic truth.
This leads to another question about whether even this kind of super-logical truth is the generator of consciousness or whether consciousness of some sort is required a priori to any formulation of ‘truth’. To me, it makes no sense for there to be truths which are undetectable, and it makes no sense for an undetectable truth to develop sensation to detect itself, so I’m convinced that arithmetic truth is a reduction of the deeper ground of being, which is not only logical and generic, but aesthetic and proprietary. Thinking is a form of feeling, rather than the other way around. No arithmetic code can produce a feeling on its own.
c) Computation may not support awareness. Those who are used to the first worldview may find this prospect to be objectionable, even offensive to their sensibilities. This in itself is an interesting response to something which is supposed to be scientific and unsentimental, but that is another topic. Sort of. What is at stake here is the sanctity of simulation. The idea that anything which can be substituted with sufficiently high resolution is functionally identical to the original is at the heart of the modern technological worldview. If you have a good enough cochlear implant, it is thought, of course it would be ‘the same as’ a biological ear. By extension, however, that reasoning would imply that a good enough simulation of a glass of water would be drinkable.
It seems obvious that no computer generated image of water would be drinkable, but some would say that it would be drinkable if you yourself also existed in that simulation. Of course, if that were the case, anything could be drinkable, including the sky, the alphabet, etc, whatever was programmed to be drinkable in that sim-world.
We should ask, then: since computational physics is so loose and ‘real’ physics is so rigidly constrained, does that mean that physics and computation are a substance dualism in which they cannot directly interact, or does it mean that physics is subsumed within computation, so that our world is only one of a set of many others, or of every other possible world (as in some MWI theories)?
d) Computation may rely on ungrounded symbols. Another topic that gets a lot of people very irritated is the line of philosophical questioning that includes Searle’s Chinese Room and Leibniz’s Mill Argument. If you’ve read this far, you’re probably already familiar with these, but the upshot is that parsimony compels us to question whether any such thing as subjective experience could be plausible in a mechanical system. Causal closure is seen not only to prohibit metaphysics, but also any chance of something like consciousness emerging through mechanical chain reactions alone.
The Church-Turing Thesis works in the opposite way here: since all mechanisms can be reduced to computation and all computation can be reduced to arithmetic steps, there is no way to justify extra-arithmetic levels of description. If we say that the brain boils down to assembly-language-type transactions, then we need a completely superfluous and unsupportable injection of brute emergence to inflate computation into phenomenal awareness.
The symbol grounding problem shows how symbols can be manipulated ‘apathetically’ to an arbitrary degree of sophistication. The passing of the Turing test is meaningless ultimately since it depends on a subjective appraisal of a distant subjectivity. There isn’t any logical reason why a computer program to simulate a brain or human communication would not be a ‘zombie’, relying on purely quantitative-syntactic manipulations rather than empathetic investment. Since we ourselves can pretend to care, without really caring, we can deduce that there may be no way to separate out a public-facing effect from a private-facing affect. We can lie and pretend and say words that we don’t mean, so we cannot naively assume that just because we build a mouth which parrots speech that meaning will spontaneously arise in the mouth, or the speech, or the ‘system’ as a whole.
In the end, I think that we can’t have it both ways. Either we say that consciousness is intrinsic and irreducible, or we admit that it makes no sense as a product of unconscious mechanisms.
The question of whether the internet could come to life is, to me, different from the question of whether Pinocchio could become a real boy only in degree. Pinocchio is a three-dimensional puppet which is animated through a fourth dimension of time. The puppeteer adds a fifth dimension to that animation, lending their own conscious symbol-grounding to the puppet’s body intentionally. The puppet has no awareness of its own. What is different about an AI is that it would take that fifth-dimensional control in-house, as it were.
It gets very tricky here, since our human experience has always been with self-directed living beings which are conscious or aware to some extent. We have no precedent in our evolution for relating to a synthetic entity which is designed explicitly to simulate the responses of a living creature. So far, what we have seen does not, in my opinion, support any fundamental progress. Pinocchio has many voices and outfits now, but he is still wooden. The uncanny valley effect gives us a glimpse into how we are intuitively and aesthetically repulsed by that which pretends to be alive. At this point, my conclusion is that we have nothing to fear from technology developing its own consciousness, any more than we fear books beginning to write their own stories. There is, however, a danger of humans abdicating their responsibility to AI systems, and thereby endangering the quality of human life. Putting ‘unpersons’ in charge of the affairs of real people may have dire consequences over time.
A rebuttal to the above from New Empiricism (my responses follow it):
Information is one of the most poorly defined terms in philosophy but it is a well defined concept in physical theory. How can it be that a clear idea in one branch of knowledge can be murky in another?
The physical meaning of information is succinctly summarised in the Wikibook on “Consciousness Studies”:
“The number of distinguishable states that a system can possess is the amount of information that can be encoded by the system.”
In most cases a “state of a system” boils down to arrangements of objects, either material objects laid out in the world or sequences of objects such as the succession of signals in a telephone line. So information is represented by physical things laid out in space and time. There is no information without this representation as an arrangement of physical objects.
Information can be processed by machines. As an example, computers use the “distinguishable states” of charge in electrical components to perform a host of useful tasks. They use the state of electrical charge in electronic components because charge can be manipulated rapidly and can be impressed on tiny components; however, computers could use the states of steel balls in boxes or carrots flowing on conveyor belts to achieve the same effect, albeit more slowly. There is nothing special about electronic computers beyond their speed, complexity and compactness. They are just machines that contain three dimensional arrangements of matter.
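As a rough illustration of that substrate-independence point (a minimal sketch added here for concreteness; it is not part of the New Empiricism text), the information capacity of a system depends only on how many distinguishable states it has, not on what those states are made of:

```python
import math

def capacity_in_bits(distinguishable_states: int) -> float:
    """Information that can be encoded by a system with N distinguishable states."""
    return math.log2(distinguishable_states)

# The substrate is irrelevant; only the count of distinguishable states matters.
print(capacity_in_bits(2))       # a single switch, or one steel ball in or out of a box: 1 bit
print(capacity_in_bits(256))     # one byte's worth of charge states: 8 bits
print(capacity_in_bits(10**6))   # a million distinguishable carrot arrangements: ~19.9 bits
```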
Philosophers use information in a much less well-defined fashion. Philosophical information is far more fuzzy and involves the quality of things such as hardness or blueness. So how does philosophical blueness differ from a physical information state?
Physical information about the world is a generalised state change that is related to particular events in the world and could be impressed on any substrate such as steel balls, etc. This allows information to be transmitted from place to place. As an example, a heat sensor in England could trigger a switch that opens a trapdoor that drops a ball that is monitored on a camera that causes changes in charge patterns in a computer that are transmitted as sounds on a radio in the USA. If the sound on the radio makes a cat jump and knock over a vase then it is probably valid to look at the vase and say “it’s hot in England”. So physical information is related to its source by the causal chain of preceding steps. Notice that each of these steps is a physical event so there is no information without representation as a state in the real world.
In the philosophical idea of information “hot” or “cold” are particular states in the mind. Our mental states are not uniquely related to the state of the world outside our bodies. As an example, human heat sensors are fickle so a blindfolded person might contain the state called “cold” when their hand is placed in water at 60 degrees or ice water at zero degrees. Our “cold” is subjective and does not have a fixed reference point in the world. Our own information is a particular state that could be induced by a variety of events in the world whereas physical information can be a variety of states triggered by a particular event in the world.
To summarise, information in physics is a state change in any substrate. It can be related to the state change in another substrate if a causal chain exists between the two substrates. Information in the mind is the state of the particular substrate that forms your particular mind.
Your mind is a state of a particular substrate but a “state” is an arrangement of events. The crucial questions for the scientist are “what events?” and “how many independent directions can be used for arranging these events?”. We can tell from our experience that at least four independent axes (or “dimensions”) are involved.
The fact that there is no information without representation of the information as a physical state means that peculiar non-physical claims such as Cartesian Dualism and Dennett’s “logical space” are not credible.
Daniel C. Dennett (1991). Consciousness Explained. Little, Brown & Co., USA. Available as a Penguin Book.
Dennett says: “So we do have a way of making sense of the idea of phenomenal space – as a logical space. This is a space into which or in which nothing is literally projected; its properties are simply constituted by the beliefs of the (heterophenomenological) subject.” Dennett is wrong because if the space contains information then it must be instantiated as a physical entity; if it is not instantiated then it does not exist, and Dennett is simply denying the experience that we all share in order to avoid explaining it. Either we have simultaneous events or we are just a single point; if we have simultaneous events, the space of our experience exists.
“So information is represented by physical things laid out in space and time.”
Why would physical things ‘represent’ anything though? Without some sensory interpretation that groups such things together so that they appear “laid out in space and time”, who is to say that there could be any ‘informing’ going on?
“computers use the “distinguishable states” of charge in electrical components to perform a host of useful tasks.”
Useful to whom? The beads of an abacus can be manipulated into states which are distinguishable by the user, but there is no reason to assume that this informs the beads, or the physical material that the beads are made of. Computers do not compute to serve their own sense or motives; they are blind, low-level reflectors of extrinsically introduced conditions.
“Your mind is a state of a particular substrate but a “state” is an arrangement of events. ”
States and arrangements are not physical because they require a mode of interpretation which is qualitative and aesthetic. Just as there can be no disembodied information, there can be no ‘states’ or ‘arrangements’ which are disentangled from the totality of sensible relations, and from specific participatory subsets therein. Information is a ghost – an impostor which reflects this totality in a narrow quantitative sense which is eternal but metaphysical, and a physical sense which is tangible and present but in which all aesthetic qualities are reduced to a one dimensional schema of coordinate permutation. Neither information nor physics can relate to each other or represent anything by themselves. It is my view that we should flip the entire assumption of forms and functions as primitively real around, so that they are instead derived from a more fundamental capacity to appreciate sensory affects and participate in motivated effects. The primordial character of the universe can only be, in my view metaphenomenal, with physics, information, and subjectivity as sensible partitions of the whole.
The underlying Symbol Grounding Problem common to all three is that, from a purely quantitative perspective, a logical truth can only satisfy some explicitly defined condition. The expectation of truth itself being implicitly true (i.e. that it is possible to doubt what is given) is not a condition of truth; it is a boundary condition beyond truth*. All computer malfunctions, we presume, are due to problems with the physical substrate or the programmer’s code, and not to incompetence or malice. The computer, its program, or binary logic in general cannot be blamed for trying to mislead anyone. Computation, therefore, has no truth quality, no expectation of validity, and no discernment between technical accuracy and the accuracy of its technique. The whole of logic is contained within the assumption that logic is valid automatically. It is an inverted mirror image of naive realism. Where a person can be childish in their truth evaluation, overextending their private world into the public domain, a computer is robotic in its truth evaluation, undersignifying privacy until it is altogether absent.
Because computers can only report a local fact (the position of a switch or token), they cannot lie intentionally. Lying involves extending a local fiction to be taken as a remote fact. When we lie, we know what a computer cannot guess – that information may not be ‘real’.
When we say that a computer makes an error, it is only because of a malfunction on the physical or programmatic level, therefore it is not false, but a true representation of the problem in the system which we receive as an error. It is only incorrect in some sense that is not local to the machine, but rather local to the user, who makes the mistake of believing that the output of the program is supposed to be grounded in their expectations for its function. It is the user who is mistaken.
It is for this same reason that computers cannot intend to tell the truth either. Telling the truth depends on an understanding of the possibility of fiction and the power to intentionally choose the extent to which the truth is revealed. The symbolic communication expressed is grounded strongly in the privacy of the subject as well as the public context, and only weakly grounded in the logic represented by the symbolic abstraction. With a computer, the hierarchy is inverted. A Turing Machine is independent of private intention and public physics, so it is grounded absolutely in its own simulacra. In Searle’s (much despised) Chinese Room Argument – the conceit of the decomposed translator exposes how the output of a program is only known to the program in its own narrow sensibility. The result of the mechanism is simply a true report of a local process of the machine which has no implicit connection to any presented truths beyond the machine…except for one: Arithmetic truth.
Arithmetic truth is not local to the machine, but it is local to all machines and all experiences of correct logical thought. This is an interesting symmetry, as the logic of mechanism is both absolutely local and instantaneous and absolutely universal and eternal, but nothing in between. Every computed result is unique to the particular instantiation of the machine or program, and universal as a Turing emulable template. What digital analogs are not is true or real in any sense which relates expressly to real, experienced events in spacetime. This is the insight expressed in Korzybski’s famous maxim ‘The map is not the territory’, and in the use-mention distinction, where using a word intentionally is understood to be distinct from merely mentioning the word as an object to be discussed. For a computer, there is no map-territory distinction. It’s all one invisible, intangible mapitory of disconnected digital events.
By contrast, a person has many ways to voluntarily discern territories and maps. They can be grouped together, such as when the acoustic territory of sound is mapped to the emotional-lyric territory of music, or the optical territory of light is mapped as the visual territory of color and image. They can be flipped so that the physics is mapped to the phenomenal as well, which is how we control the voluntary muscles of our body. For us, authenticity is important. We would rather win the lottery than just have a dream that we won the lottery. A computer does not know the difference. The dream and the reality are identical information.
Realism, then, is characterized by its opposition to the quantitative. Instead of being pegged to the polar austerity which is autonomous local + explicitly universal, consciousness ripens into the tropical fecundity of middle range. Physically real experience is in direct contrast to digital abstraction. It is semi-unique, semi-private, semi-spatiotemporal, semi-local, semi-specific, semi-universal. Arithmetic truth lacks any non-functional qualities, so that using arithmetic to falsify functionalism is inherently tautological. It is like asking an armless man to raise his hand if he thinks he has no arms.
Here’s some background stuff that relates:
The Hangman Paradox has been described as follows:
A judge tells a condemned prisoner that he will be hanged at noon on one weekday in the following week but that the execution will be a surprise to the prisoner. He will not know the day of the hanging until the executioner knocks on his cell door at noon that day.

Having reflected on his sentence, the prisoner draws the conclusion that he will escape from the hanging. His reasoning is in several parts. He begins by concluding that the “surprise hanging” can’t be on Friday, as if he hasn’t been hanged by Thursday, there is only one day left – and so it won’t be a surprise if he’s hanged on Friday. Since the judge’s sentence stipulated that the hanging would be a surprise to him, he concludes it cannot occur on Friday.

He then reasons that the surprise hanging cannot be on Thursday either, because Friday has already been eliminated and if he hasn’t been hanged by Wednesday night, the hanging must occur on Thursday, making a Thursday hanging not a surprise either. By similar reasoning he concludes that the hanging can also not occur on Wednesday, Tuesday or Monday. Joyfully he retires to his cell confident that the hanging will not occur at all.

The next week, the executioner knocks on the prisoner’s door at noon on Wednesday – which, despite all the above, was an utter surprise to him. Everything the judge said came true.
1) The conclusion “I won’t be surprised to be hanged Friday if I am not hanged by Thursday” creates another proposition to be surprised about. By leaving the condition of ‘surprise’ open ended, it could include being surprised that the judge lied, or any number of other soft contingencies that could render an ‘unexpected’ outcome. The condition of expectation isn’t an objective phenomenon, it is a subjective inference. Objectively, there is no surprise since objects don’t anticipate anything.
2) If we want to close in tightly on the quantitative logic of whether deducibility can be deduced: given five coin flips and a certainty that one will be heads, each successive tails flip increases the odds that one of the remaining flips will be heads (see the sketch below). The fifth coin will either be 100% certain to be heads, or will prove that the assumed certainty was 100% wrong.
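Here is a minimal sketch of that arithmetic (my own illustration; the numbers are just fair-coin probabilities conditioned on the guarantee), showing the odds creeping up with each observed tail until the final flip is either certain or the guarantee is falsified:

```python
from itertools import product

FLIPS = 5
# All sequences of five fair coin flips consistent with the guarantee "at least one heads".
outcomes = [seq for seq in product("HT", repeat=FLIPS) if "H" in seq]

for tails_seen in range(FLIPS):
    # Outcomes still possible after observing only tails so far.
    consistent = [seq for seq in outcomes if set(seq[:tails_seen]) <= {"T"}]
    next_heads = [seq for seq in consistent if seq[tails_seen] == "H"]
    print(f"after {tails_seen} tails: P(next flip is heads) = {len(next_heads) / len(consistent):.3f}")

# Prints roughly 0.516, 0.533, 0.571, 0.667, 1.000 – the certainty only arrives on the
# final flip, mirroring the prisoner's "it can't be Friday" step in the paradox.
```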
I think the paradox hinges on 1) the false inference of objectivity in the use of the word ‘surprise’ and 2) the false assertion of omniscience by the judge. It’s like an Escher drawing. In real life, surprise cannot be predicted with certainty, and the quality of unexpectedness is not an objective thing, just as expectation is not an objective thing.
Connecting the dots, expectation, intention, realism, and truth are all rooted in the firmament of sensory-motive participation. To care about what happens cannot be divorced from our causally efficacious role in changing it. It’s not just a matter of being petulant or selfish. The ontological possibility of ‘caring’ requires letters that are not in the alphabet of determinism and computation. It is computation which acts as punctuation, spelling, and grammar, but not language itself. To a computer, every word or name is as generic as a number. They can store the string of characters that belong to what we call a name, but they have no way to really recognize who that name belongs to.
*I maintain that what is beyond truth is sense: direct phenomenological participation