Archive

Posts Tagged ‘Searle’

How to Tell if Your p-Zombie has Blindsight, Falling in a Chinese Forest

January 26, 2015

In order to make the question of philosophical zombiehood more palatable, it is a good idea to first reduce the scope of the question from consciousness in general to a particular kind of awareness, such as visual awareness or sight.

consciousness (general awareness) | particular awareness (sight)

Building on this analogy, we can say that the equivalent of a philosophical zombie (p-Zombie), as far as sight is concerned, might be a person who is blind but uses a seeing-eye dog to ‘see’.

As with blindsight, the seeing-eye dog provides a case where a person is informed about optical conditions in the world without the benefit of first-person phenomenal experience of seeing. The blind person sees no visual qualia – no colors, no shapes, no brightness or contrast – yet from all appearances they may be indistinguishable from a sighted person who is walking their dog.

Staying with the analogy to consciousness in general:

(a p-Zombie) is to (a blind person w/ guide dog)
as
(a conscious subject) is to (a sighted person walking their dog)

Some might object to this analogy on the grounds that a p-Zombie is defined as appearing and behaving in every way like a conscious subject, whereas a sighted person walking their dog might not always act the same as a blind person with a guide dog. It’s true: in the dark, the sighted person would be at a disadvantage avoiding obstacles in their path, while the blind person might not be affected.

This, however, is a red herring that arises from the hasty definition of a philosophical zombie as one who appears identical in every way to a conscious subject, rather than one who can appear identical in many ways. Realistically, there may not even be a way to know whether there is any such thing as a fixed set of ways that a conscious being behaves. A conscious being can pretend to be unconscious, so right away this is a problem that may not be resolvable.

Each conscious being is different at different times, so presuming that consciousness in general has a unique signature that can be verified is begging the question. Even if two simple things seemed identical for some period of time, their behavior might eventually diverge, either on its own or in response to some condition that brings out a latent difference.

So let’s forget about the strong formulation of p-Zombie and look instead at the more sensible weak formulation of w-Zombie as an unconscious object which can be reliably mistaken for a conscious subject under some set of conditions, for some audience, for some period of time.

By this w-Zombie standard, the guide dog’s owner makes a good example of how one system (blind person + sighted dog) can be functionally identical to another (sighted person + sighted dog), without any phenomenal property (blind person gaining sight) emerging. As with the Chinese Room, the resulting output of the room does not reflect an internal experience, and the separate functions which produce the output do not transfer their experience to the ‘system’ as a whole.

From the guide dog analogy, we can think about bringing the functionality of the dog ‘in house’. The dog can be a robot dog, which can then be miniaturized as a brain implant. In this way a blind person could have the functionality of a guide dog’s sight without seeing anything. It would be interesting to see how the recipient’s brain would integrate the input from such an implant. Neuroscientific studies conducted so far show that in blind people’s brains, tactile stimulation such as reading Braille registers in the visual cortex. I would expect that the on-board seeing-eye dog would similarly show up, at least in part, in the regions of the brain normally associated with vision, so that we would have a proof of concept of a w-Zombie. If we had separate digitized animals to handle each of our senses, we could theoretically create an object which behaves enough like a human subject, even within the brain, that it would qualify as a weak p-Zombie.

As a final note, we can apply this understanding to the oft-misquoted philosophical saw ‘If a tree falls in a forest…’. Instead of asking whether a sound exists without anyone to hear it, we can reverse it: if someone is awakened from a dream of a tree falling in a forest – a sound which nobody else heard – was there a sound?

The answer has to be yes. The subjective experience of a sound still occurred, even though there is no other evidence of it. In the same way, we can dream of seeing sunlight without our eyes receiving photons from the sun. We can say that seeing light or hearing sound does not require a concurrent physical stimulation, but we cannot say that physics requires any such qualia as seeing light or hearing sound. To the contrary, we have shown above that there is no empirical support for the idea that physical functions could or would automatically generate qualia. Thus, the case for materialism and functionalism is proved in the negative, and the fallacy of the systems reply to Searle is revealed.

And if the cloud bursts, thunder in your ear
You shout and no one seems to hear.
And if the band you’re in starts playing different tunes
I’ll see you on the dark side of the moon. – Pink Floyd

m-zombie: A Challenge to Computationalism

March 16, 2014

Have a look at the quick video (or animated gif) that accompanied this post: a VCR with a camera pointed at the monitor displaying its own output.

Since the VCR can get video feedback of itself, is there any computational reason why this doesn’t count as a degree of self-awareness?

Turning the Systems Reply to the Chinese Room on its head, I submit that if we consider the Chinese Room to be an intelligent *system*, then we must also consider the system of a VCR + camera pointed at itself to be a viable AGI.

The correlation of corrupted video feed with the actual physical attacks on the system must be construed as awareness, especially since the video effects are unique and specifically correlated so as to present a complex vocabulary of responses. If functionalism rejects the evidence of this correlation as being experienced qualitatively, then it must posit the existence of an m-zombie (machine zombie), in which the system unintentionally mimics the responses expected from sentient computations from a sub-computational level.
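To make the setup concrete, here is a toy simulation of such a feedback loop – a minimal sketch assuming only the VCR + camera arrangement described above, with illustrative function names rather than any real video API. It shows that the correlation between output corruption and physical disturbance arises through purely mechanical means:

```python
# Toy model of the VCR + camera feedback loop (illustrative only).
# Each frame, the 'camera' re-captures the screen showing its own output
# (approximated here as a slight zoom plus sensor noise), and a physical
# 'bump' to the hardware perturbs the signal.
import numpy as np

def camera_capture(screen: np.ndarray, bump: float) -> np.ndarray:
    """Re-capture the displayed frame: slight zoom, decay, noise, plus any bump."""
    h, w = screen.shape
    # Crude zoom: resample the central 90% of the screen back to full size.
    ys = (np.linspace(0.05, 0.95, h) * h).astype(int)
    xs = (np.linspace(0.05, 0.95, w) * w).astype(int)
    zoomed = screen[np.ix_(ys, xs)]
    noise = np.random.normal(0.0, 0.01 + bump, size=(h, w))  # bump -> corruption
    return np.clip(zoomed * 0.98 + noise, 0.0, 1.0)

frame = np.random.rand(64, 64)           # arbitrary starting image
for t in range(100):
    bump = 0.5 if t == 50 else 0.0       # a 'physical attack' at t = 50
    frame = camera_capture(frame, bump)
    if t in (49, 50, 51):
        print(t, round(float(frame.std()), 4))  # the spread spikes with the bump
```

The loop is genuinely self-referential and its output reliably co-varies with what is done to it, yet every step is an ordinary array operation – which is exactly the tension the m-zombie label is meant to capture.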

Questioning the Sufficiency of Information

January 12, 2014
Better Than The Chinese Room

Searle’s “Chinese Room” thought experiment tends to be despised by strong AI enthusiasts, who seem to take issue with Searle personally because of it. Besides accusing both the allegory and its author of being stupid, they most often offer the Systems Reply: the man in the room may not understand Chinese, but surely the whole system, including the book of translation, must be considered to understand Chinese.

Here, then, is a simpler and more familiar example of how computation can differ from natural understanding, one which is not susceptible to any mereological Systems argument.

If any of you use passwords based on a pattern of keystrokes rather than the letters on the keys, you know that you can enter your password every day without ever knowing what it is you are typing (something with a #r5f^ in it…?).

I think this is a good analogy for machine intelligence. By storing and copying procedures, a pseudo-semantic analysis can be performed, but it is an instrumental logic that has no way to access the letters of the ‘human keyboard’. The universal machine’s keyboard is blank and consists only of theoretical x,y coordinates where keys would be. No matter how good or sophisticated the machine is, it will still have no way to understand what the particular keystrokes “mean” to a person, only how they fit in with whatever set of fixed possibilities has been defined.
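As a minimal sketch of this blank-keyboard point, consider a verifier that stores and checks a password purely as physical key positions (the names and layout here are illustrative, not any real authentication API):

```python
# The verifier authenticates a sequence of (row, column) key positions.
# The character labels printed on the keys never enter into the check.
from typing import List, Tuple

KeyPos = Tuple[int, int]  # (row, column) on the physical keyboard

def verify(stored: List[KeyPos], typed: List[KeyPos]) -> bool:
    """Accept iff the same physical positions were struck in the same order."""
    return stored == typed

# The user remembers a spatial gesture; the machine stores bare coordinates.
stored_pattern = [(2, 3), (1, 4), (2, 4), (3, 5)]

print(verify(stored_pattern, [(2, 3), (1, 4), (2, 4), (3, 5)]))  # True
print(verify(stored_pattern, [(2, 3), (1, 4), (2, 4), (3, 6)]))  # False
```

Relabel every key – swap the caps to a Dvorak layout, say – and verification is unaffected, because the system never knew which letters the positions ‘meant’ in the first place.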

Taking the analogy further, the human keyboard only applies to public communication. Privately, we have no keys to strike, and entire paragraphs or books can be represented by a single thought. Unlike computers, we do not have to build our ideas up from syntactic digits. Instead the public-facing computation follows from the experienced sense of what is to be communicated in general, from the top down, and the inside out.


The Scale of Digital

How large does a digital circle have to be before the circumference seems like a straight line?

Digital information has no scale or sense of relation. Code is code. Any rendering of that code into a visual experience of lines and curves is a question of graphic formatting and human optical interaction. In a universe where information is assumed to be fundamental, the proximity-dependent flatness or roundness of the Earth would have to be defined programmatically. Otherwise, it is simply “the case” that a person is standing on the round surface of the round Earth. Proximity is simply a value with no inherent geometric relevance.

When we resize a circle in Photoshop, for instance, the program is not transforming a real shape; it is erasing the old digital circle and creating a new, unrelated digital circle. Like a cartoon, the relation between the before and after, between one frame and the “next”, is within our own interpretation, not within the information.
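A toy rasterizer makes the point, under the simplifying assumption that a ‘digital circle’ is just the set of pixels lying near the locus of the circle equation (the function below is illustrative):

```python
# 'Resizing' here does not transform the small circle's pixels; it discards
# them and rasterizes a new, unrelated set from scratch.
def rasterize_circle(radius: int) -> set:
    """Return pixel coordinates lying near a circle of the given radius."""
    pixels = set()
    for x in range(-radius, radius + 1):
        for y in range(-radius, radius + 1):
            if abs(x * x + y * y - radius * radius) <= radius:  # near the rim
                pixels.add((x, y))
    return pixels

small = rasterize_circle(10)
large = rasterize_circle(100)  # a fresh computation, not a scaled-up 'small'
print(len(small), len(large), len(small & large))  # the two rims share no pixels
```

Nothing in the data links the two circles as ‘the same shape at different scales’; that relation lives in our interpretation of the renderings.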

Strong AI Position

August 17, 2013


It may not be possible to imitate a human mind computationally, because awareness may be driven by aesthetic qualities rather than mathematical logic alone. The problem, which I call the Presentation Problem, is what several outstanding issues in science and philosophy have in common, namely the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the Binding problem, and the symmetries of mind-body dualism. Underlying all of these is the map-territory distinction: the need to recognize the difference between presentation and representation.

Because human minds are unusual phenomena in that they are presentations which specialize in representation, they have a blind spot when it comes to examining themselves. The mind is blind to the non-representational. It does not see that it feels, and does not know how it sees. Since its thinking is engineered to strip out most direct sensory presentation in favor of abstract sense-making representations, it fails to grasp the role of presence and aesthetics in what it does. It tends toward overconfidence in the theoretical.

The mind takes worldly realism for granted on one hand, but conflates it with its own experiences as a logic processor on the other. It’s a case of the fallacy of the instrument, where the mind’s hammer of symbolism sees symbolic nails everywhere it looks. Through this intellectual filter, the notion of disembodied algorithms which somehow generate subjective experiences and objective bodies (even though experiences or bodies would serve no plausible function for purely mathematical entities) becomes an almost unavoidably seductive solution.

So appealing is this quantitative underpinning for the Western mind’s cosmology that many people (especially Strong AI enthusiasts) find it easy to ignore that the character of mathematics and computation reflects precisely the opposite qualities from those which characterize consciousness. To act like a machine, robot, or automaton is not merely an alternative personal lifestyle; it is the common style of all unpersons and all that is evacuated of feeling. Mathematics is inherently amoral, unreal, and intractably self-interested – a windowless universality of representation.

A computer has no aesthetic preference. It makes no difference to a program whether its output is displayed on a monitor with millions of colors, buzzing out of a speaker, or streaming as electronic pulses over a wire. This is the primary utility of computation, and it is why the digital is not locked into physical constraints of location. Since programs don’t deal with aesthetics, we can only use the program to format values in a way that corresponds with the expectations of our sense organs. That format, of course, is alien and arbitrary to the program. It is semantically ungrounded data, fictional variables.

Something like the Mandelbrot set may look profoundly appealing to us when it is presented optically, plotted as colorful graphics, but the same data set has no interesting qualities when played as audio tones. The program generating the data has no desire to see it realized in one form or another, no curiosity to see it as pixels or voxels. The program is absolutely content with a purely quantitative functionality – with algorithms that correspond to nothing except themselves.
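A short sketch makes this concrete (illustrative code, not any particular rendering pipeline): the same escape-time numbers can be handed to an ‘optical’ formatter or an ‘audio’ formatter, and the generating computation is identical and indifferent either way:

```python
# Compute Mandelbrot escape-time counts, then format the same numbers two ways.
import numpy as np

def mandelbrot_counts(width=80, height=40, max_iter=50) -> np.ndarray:
    """Escape-time counts over a grid in the complex plane."""
    xs = np.linspace(-2.0, 0.6, width)
    ys = np.linspace(-1.2, 1.2, height)
    counts = np.zeros((height, width), dtype=int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            c, z, n = complex(x, y), 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z, n = z * z + c, n + 1
            counts[i, j] = n
    return counts

data = mandelbrot_counts()

# Format 1: 'optical' rendering as ASCII shading (a stand-in for pixels).
shades = " .:-=+*#%@"
for row in data[::2]:
    print("".join(shades[min(n * len(shades) // 51, 9)] for n in row))

# Format 2: the very same numbers scaled to audio amplitudes in [-1, 1],
# ready to be written out as samples. The data itself never changed.
samples = data.flatten() / 25.0 - 1.0
print(samples[:8])
```

Whether the counts end up as shading or as tones is decided entirely outside the algorithm; to the program they are just integers.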

In order for the generic values of a program to be interpreted experientially, they must first be re-enacted through controllable physical functions. It must be perfectly clear that this re-enactment is not a ‘translation’ or a ‘porting’ of data to a machine; rather, it is more like a theatrical adaptation from a script. The program works because the physical mechanisms have been carefully selected and manufactured to match the specifications of the program. The program itself is utterly impotent as far as manifesting itself in any physical or experiential way. The program is a menu, not a meal. Physics provides the restaurant and food; subjectivity provides the patrons, chef, and hunger. It is the physical interactions which are interpreted by the user of the machine, and it is the user alone who cares what it looks like, sounds like, tastes like, etc. An algorithm can comment on what is defined as being liked, but it cannot like anything itself, nor can it understand what anything is like.

If I’m right, all natural phenomena have a public-facing mechanistic range and a private-facing animistic range. An algorithm bridges the gap between public-facing, space-time extended mechanisms, but it has no access to the private-facing aesthetic experiences which vary from subject to subject. By definition, an algorithm represents a process generically, but how that process is interpreted is inherently proprietary.
