Could the Internet come to life?
It sounds like a silly proposition, and it is a little tongue-in-cheek, but trying to come up with an answer could have ramifications for the sciences of consciousness and sentience.
Years ago, cosmologist Paul Davies discussed the idea that organic matter, and therefore life, is intrinsically no different from inorganic matter – the only difference is the degree of complexity.
So when a system becomes sufficiently complex, the property we know (but still can’t define) as ‘life’ might emerge spontaneously, as it did from amino acids and proteins some three billion years ago.
We have such a system today in the internet. As far back as 2005, Kevin Kelly pointed out that the internet would soon have as many ‘nodes’ as a human brain. The idea has even been explored in fiction, in Robert J. Sawyer’s Wake series (Page on sfwriter.com).
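To put very rough numbers on Kelly’s comparison, here’s a back-of-envelope sketch; every figure in it is an order-of-magnitude assumption for illustration, not a measured statistic:

```python
# Back-of-envelope comparison of brain vs. internet 'node' counts.
# All figures are rough order-of-magnitude assumptions, for illustration only.

NEURONS_IN_BRAIN = 8.6e10   # ~86 billion neurons (commonly cited estimate)
SYNAPSES_IN_BRAIN = 1e14    # ~100 trillion synaptic connections (rough)
INTERNET_DEVICES = 1e10     # assumed ~10 billion connected devices
LINKS_PER_DEVICE = 100      # assumed average connections per device

internet_links = INTERNET_DEVICES * LINKS_PER_DEVICE

print(f"Devices vs. neurons: {INTERNET_DEVICES / NEURONS_IN_BRAIN:.2f}x")
print(f"Links vs. synapses:  {internet_links / SYNAPSES_IN_BRAIN:.4f}x")
# On these loose numbers the internet rivals the brain in node count,
# but still falls orders of magnitude short in connection count.
```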
And since human consciousness – and all the deep abstract knowledge, creativity, love, etc. it gives us – arises from a staggering number of deceptively simple parts, couldn’t the same thing happen to the internet (or another sufficiently large and complex system)?
I’m trying to crowdsource a series of articles on the topic (Could the Internet come to life? by Drew Turney – Beacon). I know this isn’t the place to advertise, and while I’d love everyone who reads this to back me, I’m more interested in gathering food for thought from the responses, should I get the project off the ground.
I think that the responses here will tend to support one of two worldviews. In the first worldview, the facts of physics and information science lead us inevitably to conclude that consciousness and life are purely a matter of particular configurations of forms and functions. Whether those forms and functions are strictly tied to specific materials, or are substrate-independent and therefore purely logical entities, is another tier of the debate, but all who subscribe to the first worldview agree: if a particular set of functions is instantiated, the result will be life and conscious experience.
The second worldview includes all those who suspect that something more is required – that information or physics may be necessary for life, but not sufficient. That worldview can be divided further into those who think the other factor is spiritual or supernatural, and those who think it is an as-yet-undiscovered factor. Those in the first camp might assert that the second worldview is unlikely or impossible for three reasons:
1) Causal Closure eliminates non-physical causes of physical phenomena
2) Bell’s Theorem eliminates local hidden variables (including vital essences)
3) Church-Turing Thesis supports the universality of computation
1) Causal Closure – The idea that all physical effects have physical causes can be seen either as an ironclad law of the universe or as a tautological fallacy that begs the question of materialism. On the one hand, adherents of the first worldview can say that if there were any non-physical cause of a physical effect, we would by definition see the effect of that cause as physical. There is simply no room in the laws of physics for magical, non-local forces, as the tiniest deviation in experimental data would show up for us as a paradigm-shifting event in the history of physics.
On the other hand, adherents of the second view can either point to a theological transcendence of physics which is miraculous and beyond physical explanation, or they can question the suppositions of causal closure as biased from the start. Since all physical measurements are made using physical instruments, any metaphysical contribution might be systematically minimized or missed.
It could be argued that physics is like wearing colored glasses: rather than proving that all phenomena can be reduced to ‘red images’, all it proves is that working with the public-facing exteriors of nature yields a predictably public-facing exterior logic. Rather than diminishing the significance of private-facing phenomenal experience, it may be physics which is the diminished ‘tip of the iceberg’, with the remaining bulk of the iceberg being a transphysical, transpersonal firmament. Just as we observe the ability of our own senses to fill in gaps in perceptual continuity, it could be that physics has a similar plasticity. Relativity may extend beyond physics, such that physics itself is a curvature of deeper conscious/metaphysical attractors.
Another alternative to assuming causal closure is to see the different levels of description in physics as semi-permeable to causality. Our bodies are made of living cells, but on that level of description ‘we’ don’t exist. A TV show doesn’t ‘exist’ on the level of illuminated pixels or digital data in a TV set. Each level of description is defined by a scope and scale of perception which is only meaningful on that scale. If we applied strong causal closure, there would be no room for any such thing as a level of description or conscious perspective. Physics has no observers, unless we smuggle them in as unacknowledged voyeurs from our own non-physically-accounted-for experience.
To my mind, it’s difficult to defend causal closure in light of recent changes in astrophysics, where the vast bulk of the universe’s mass-energy has suddenly been re-categorized as dark energy and dark matter. Not only could these newly minted phenomena be ‘dark’ because they are metaphysical, but they show that physics cannot be counted on to limit itself to any particular definition of what counts as physics.
2) Here’s a passage about Bell’s Theorem which says it better than I could:
“Bell’s Theorem, expressed in a simple equation called an ‘inequality’, could be put to a direct test. It is a reflection of the fact that no signal containing any information can travel faster than the speed of light. This means that if hidden-variables theory exists to make quantum mechanics a deterministic theory, the information contained in these ‘variables’ cannot be transmitted faster than light. This is what physicists call a ‘local’ theory. John Bell discovered that, in order for Bohm’s hidden-variable theory to work, it would have to be very badly ‘non-local’, meaning that it would have to allow for information to travel faster than the speed of light. This means that, if we accept hidden-variable theory to clean up quantum mechanics because we have decided that we no longer like the idea of assigning probabilities to events at the atomic scale, we would have to give up special relativity. This is an unsatisfactory bargain.” – Archive of Astronomy Questions and Answers
From another article (Physics: Bell’s theorem still reverberates):
As Bell proved in 1964, this leaves two options for the nature of reality. The first is that reality is irreducibly random, meaning that there are no hidden variables that “determine the results of individual measurements”. The second option is that reality is ‘non-local’, meaning that “the setting of one measuring device can influence the reading of another instrument, however remote”.
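For concreteness, the inequality itself is short enough to quote. This is the standard CHSH form, with the usual quantum bound:

```latex
% CHSH form of Bell's inequality. a, a' and b, b' are detector settings
% on two separated particles; E(a,b) is the measured correlation.
\[
  S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b')
\]
% Any local hidden-variable theory must satisfy
\[
  |S| \le 2,
\]
% whereas quantum mechanics predicts, for suitably chosen settings,
\[
  |S| = 2\sqrt{2} \approx 2.83 \quad \text{(Tsirelson's bound)}.
\]
```

Experiments consistently find values above 2, which is precisely what rules out local hidden variables.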
Bell’s inequality could go either way, then. Nature could be random and local, non-local and physical, or non-local and metaphysical… or perhaps all of the above. We don’t have to conceive of ‘vital essences’ in the sense of a dark physics that connects our private will to public matter and energy; instead we can see physics as a masked or spatiotemporally diffracted reflection of a nature that is not only trans-physical, but perhaps trans-dimensional and trans-ontological. It may be that beneath every fact is a kind of fiction.
If particles are, as Fritjof Capra said “tendencies to exist”, then the ground of being may be conceived of as a ‘pretend’-ency to exist. This makes sense to me, since we experience with our own imagination a constant stream of interior rehearsals for futures that might never be and histories that probably didn’t happen the way that we think. Rather than thinking of our own intellect as purely a vastly complex system on a biochemical scale, we may also think of it as a vastly simple non-system, like a monad, of awareness which is primordial and fundamentally inseparable from the universe as a whole.
3) The Church-Turing Thesis has to do with computability – whether every effectively calculable function can be broken down into simple mechanical steps. If we accept it as true, then it can be reasoned, through the first worldview, that since the brain is physical, and physics can be modeled mathematically, there should be no reason why a brain cannot be simulated as a computer program.
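To see how little machinery ‘computation’ actually requires, here’s a minimal Turing machine sketch of my own (the transition table is invented for illustration) that adds two unary numbers using nothing but read, write and move steps:

```python
# A minimal Turing machine simulator, illustrating the Church-Turing idea
# that computation bottoms out in trivially simple read/write/move steps.
# This example machine adds two unary numbers, e.g. '111+11' -> '11111'.

def run_tm(tape, rules, state="start", blank="_"):
    """Run a Turing machine until it reaches the 'halt' state."""
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if pos >= len(tape):
            tape.append(blank)  # grow the tape rightward as needed
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Transition table: (state, read) -> (write, move, next_state)
rules = {
    ("start", "1"): ("1", "R", "start"),      # skip over the first number
    ("start", "+"): ("1", "R", "seek_end"),   # turn the '+' into a 1
    ("seek_end", "1"): ("1", "R", "seek_end"),
    ("seek_end", "_"): ("_", "L", "erase"),   # found the right end
    ("erase", "1"): ("_", "L", "halt"),       # erase the surplus 1
}

print(run_tm("111+11", rules))  # -> '11111'
```

The Church-Turing claim is that anything we would call computing, however elaborate, bottoms out in steps no richer than these.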
There are some possible problems with this:
a) The brain and its behavior may not be physically complete. There are many theories about consciousness and the brain. Penrose and Hameroff’s ‘Orch OR’ (Orchestrated Objective Reduction) theory postulates that consciousness depends on quantum computations within cytoskeletal structures called microtubules. In that case, what the brain does may not be entirely physically accessible. According to Orch OR, the brain’s behavior is ultimately caused by quantum wavefunction collapse through large-scale orchestrated objective reductions. Quantum events of this sort could not be reproduced or measured before they happen, so there is no reason to expect that a computer model of a brain would work.
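For reference, Penrose’s objective reduction gives a concrete (if contested) formula for when a superposition should self-collapse; I include the standard statement of the proposal here only to make ‘Orchestrated Objective Reduction’ less abstract:

```latex
% Penrose's objective-reduction timescale, as used in Orch OR:
% a quantum superposition self-collapses after a time tau set by E_G,
% the gravitational self-energy of the mass-distribution difference
% between the superposed states (hbar is the reduced Planck constant).
\[
  \tau \;\approx\; \frac{\hbar}{E_G}
\]
% The larger the superposed mass difference, the larger E_G and the
% sooner the collapse; Orch OR holds that microtubule states are
% 'orchestrated' so that these collapses do computational work.
```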
b) Consciousness may not be computable. Like Bell’s work in quantum mechanics, mathematics took an enigmatic turn with Gödel’s incompleteness theorems. Long story short, Gödel showed that any consistent formal system rich enough to express arithmetic contains true statements which cannot be proved without reaching outside that system. Formal logic is incomplete. Like Bell’s inequality, incompleteness takes us into a world where either epistemology breaks down completely and we have no way of ever knowing whether what we know is true, or we are compelled to consider that logic itself depends on a more transcendent, Platonic realm of arithmetic truth.
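Stated a little more formally (this is the standard textbook formulation, not anything specific to this debate):

```latex
% Goedel's first incompleteness theorem, informally:
% for any consistent, effectively axiomatized theory T that can
% express elementary arithmetic, there is a sentence G_T with
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T,
\]
% even though G_T is true in the standard model of arithmetic.
```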
This leads to another question: whether even this kind of super-logical truth is the generator of consciousness, or whether consciousness of some sort is required prior to any formulation of ‘truth’. To me, it makes no sense for there to be truths which are undetectable, and it makes no sense for an undetectable truth to develop sensation to detect itself, so I’m convinced that arithmetic truth is a reduction of the deeper ground of being, which is not only logical and generic, but aesthetic and proprietary. Thinking is a form of feeling, rather than the other way around. No arithmetic code can produce a feeling on its own.
c) Computation may not support awareness. Those who are used to the first worldview may find this prospect objectionable, even offensive to their sensibilities. That in itself is an interesting response to something which is supposed to be scientific and unsentimental, but that is another topic. Sort of. What is at stake here is the sanctity of simulation. The idea that anything which can be substituted with sufficiently high resolution is functionally identical to the original is at the heart of the modern technological worldview. If you have a good enough cochlear implant, it is thought, of course it would be ‘the same as’ a biological ear. By extension, however, that reasoning would imply that a good enough simulation of a glass of water would be drinkable.
It seems obvious that no computer-generated image of water would be drinkable, but some would say that it would be drinkable if you yourself also existed in that simulation. Of course, if that were the case, anything could be drinkable – the sky, the alphabet, whatever was programmed to be drinkable in that sim-world.
We should ask, then: since computational physics is so loose and ‘real’ physics is so rigidly constrained, does that mean physics and computation form a substance dualism in which they cannot directly interact, or does it mean that physics is subsumed within computation, so that our world is only one of a set of many others, or of every other possible world (as in some MWI theories)?
d) Computation may rely on ungrounded symbols. Another topic that gets a lot of people very irritated is the line of philosophical questioning that includes Searle’s Chinese Room and Leibniz’s Mill argument. If you’ve read this far, you’re probably already familiar with these, but the upshot is that parsimony compels us to question whether any such thing as subjective experience is plausible in a mechanical system. Causal closure is seen to prohibit not only metaphysics, but also any chance of something like consciousness emerging through mechanical chain reactions alone.
The Church-Turing Thesis works in the opposite way here: since all mechanisms can be reduced to computation, and all computation can be reduced to arithmetic steps, there is no way to justify extra-arithmetic levels of description. If we say that the brain boils down to assembly-language-type transactions, then we need a completely superfluous and unsupportable injection of brute emergence to inflate computation into phenomenal awareness.
The symbol grounding problem shows how symbols can be manipulated ‘apathetically’ to an arbitrary degree of sophistication. Passing the Turing test is ultimately meaningless, since it depends on a subjective appraisal of a distant subjectivity. There is no logical reason why a computer program that simulates a brain or human communication would not be a ‘zombie’, relying on purely quantitative-syntactic manipulations rather than empathetic investment. Since we ourselves can pretend to care without really caring, we can deduce that there may be no way to separate a public-facing effect from a private-facing affect. We can lie and pretend and say words that we don’t mean, so we cannot naively assume that just because we build a mouth which parrots speech, meaning will spontaneously arise in the mouth, or the speech, or the ‘system’ as a whole.
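To make the ‘apathetic’ manipulation concrete, here is a deliberately trivial sketch of my own (a toy, not Searle’s actual thought experiment): a program that produces fluent answers by pure lookup. Scaling the rulebook up increases the sophistication, not the understanding.

```python
# A toy 'Chinese Room': the program produces plausible replies by pure
# lookup, with no understanding anywhere in the system. The rulebook
# entries below are invented for illustration.

RULEBOOK = {
    "how are you?": "I am well, thank you for asking.",
    "do you understand me?": "Of course I understand you.",
    "what is water?": "Water is a clear liquid that living things drink.",
}

def room_reply(message: str) -> str:
    """Return a canned reply; symbols in, symbols out, nothing grounded."""
    return RULEBOOK.get(message.lower(), "Could you rephrase that?")

print(room_reply("Do you understand me?"))
# -> 'Of course I understand you.'  The answer is fluent, and false.
```

Nothing in the lookup ‘understands’ water or caring; the fluency is entirely borrowed from whoever wrote the rulebook.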
In the end, I think that we can’t have it both ways. Either we say that consciousness is intrinsic and irreducible, or we admit that it makes no sense as a product of unconscious mechanisms.
The question of whether the internet could come to life differs, to me, from the question of whether Pinocchio could become a real boy only in degree. Pinocchio is a three-dimensional puppet which is animated through a fourth dimension of time. The puppeteer adds a fifth dimension to that animation, intentionally lending their own conscious symbol-grounding to the puppet’s body. The puppet has no awareness of its own. What is different about an AI is that it would take that fifth-dimensional control in-house, as it were.
It gets very tricky here, since our human experience has always been with other self-directed, living beings which are conscious or aware to some extent. We have no precedent in our evolution for relating to a synthetic entity designed explicitly to simulate the responses of a living creature. So far, what we have seen does not, in my opinion, support any fundamental progress. Pinocchio has many voices and outfits now, but he is still wooden. The uncanny valley effect gives us a glimpse into how we are intuitively and aesthetically repulsed by that which pretends to be alive. At this point, my conclusion is that we have nothing to fear from technology developing its own consciousness, any more than we fear books beginning to write their own stories. There is, however, a danger of humans abdicating their responsibility to AI systems, and thereby endangering the quality of human life. Putting ‘unpersons’ in charge of the affairs of real people may have dire consequences over time.