Strong AI Position
It may not be possible to imitate a human mind computationally, because awareness may be driven by aesthetic qualities rather than by mathematical logic alone. The problem, which I call the Presentation Problem, is what several outstanding issues in science and philosophy have in common, namely the Explanatory Gap, the Hard Problem, the Symbol Grounding Problem, the Binding Problem, and the symmetries of mind-body dualism. Underlying all of these is the map-territory distinction: the need to recognize the difference between presentation and representation. In a perverse twist, strong AI computationalism both denies the aesthetic vitality of semiosis and takes it for granted. We wind up with a universe in which metaphor is necessarily decomposed into syntactic semaphore functions, yet the collection of those functions is inexplicably endowed with aesthetic and semantic depths. In my view, a computer is a collection of processes which make sense only in the micro, immediate sense and in the abstract arithmetic sense, but not in the senses which a living organism accumulates through thousands of lifetimes of experience and culture.
Because human minds are unusual phenomena, in that they are presentations which specialize in representation, they have a blind spot when it comes to examining themselves. The mind is blind to the non-representational. It does not see that it feels, and it does not know how it sees. Since its thinking is engineered to strip out most direct sensory presentation in favor of abstract sense-making representations, it fails to grasp the role of presence and aesthetics in what it does. It tends toward overconfidence in the theoretical. The mind takes worldly realism for granted on one hand, yet conflates it with its own experience as a logic processor on the other. It is a case of the fallacy of the instrument, in which the mind's hammer of symbolism sees symbolic nails everywhere it looks. Through this intellectual filter, the notion of disembodied algorithms which somehow generate subjective experiences and objective bodies (even though experiences or bodies would serve no plausible function for purely mathematical entities) becomes an almost unavoidably seductive solution.
So appealing is this quantitative underpinning for the Western mind's cosmology that many people (especially Strong AI enthusiasts) find it easy to ignore that the character of mathematics and computation reflects precisely the opposite qualities from those which characterize consciousness. To act like a machine, robot, or automaton is not merely an alternative personal lifestyle; it is the common style of all unpersons and of all that is evacuated of feeling. Mathematics is inherently amoral, unreal, and intractably self-interested: a windowless universality of representation.
A computer has no aesthetic preference. It makes no difference to a program whether its output is displayed on a monitor in millions of colors, buzzed out of a speaker, or streamed as electronic pulses over a wire. This is the primary utility of computation, and it is why the digital is not locked into the physical constraints of location. Since programs do not deal with aesthetics, we can only use a program to format values in a way that corresponds to the expectations of our sense organs. That format, of course, is alien and arbitrary to the program. It is semantically ungrounded data, fictional variables.
Something like the Mandelbrot set may look profoundly appealing to us when it is presented optically, plotted as colorful graphics, but the same data set has no interesting qualities when played as audio tones. The program generating the data has no desire to see it realized in one form or another, no curiosity to see it as pixels or voxels. The program is absolutely content with a purely quantitative functionality – with algorithms that correspond to nothing except themselves.
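As a minimal illustration in Python (a sketch only; the function names, the character palette, and the frequency mapping are arbitrary choices for illustration, not anything specified above), the same grid of Mandelbrot escape counts can be computed once and then merely relabeled, either as an ASCII picture for the eyes or as a list of frequencies for the ears. Nothing in the computation registers, let alone prefers, either presentation.

def mandelbrot_counts(width=60, height=24, max_iter=40):
    """Compute escape counts for a small grid over the complex plane."""
    counts = []
    for row in range(height):
        line = []
        for col in range(width):
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            z, n = 0j, 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            line.append(n)
        counts.append(line)
    return counts

def as_ascii(counts, max_iter=40):
    """One arbitrary presentation: map the counts onto a character palette."""
    palette = " .:-=+*#%@"
    scale = len(palette) - 1
    return "\n".join(
        "".join(palette[n * scale // max_iter] for n in row) for row in counts
    )

def as_tones(counts, base_hz=110.0, step_hz=20.0):
    """Another arbitrary presentation: map the very same counts onto frequencies."""
    return [base_hz + step_hz * n for row in counts for n in row]

if __name__ == "__main__":
    data = mandelbrot_counts()
    print(as_ascii(data))        # striking to our eyes as a picture
    print(as_tones(data)[:10])   # the identical numbers, destined for a speaker

The choice between the two output functions matters only to the person running the script; as far as the program is concerned, both are just more arithmetic.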
In order for the generic values of a program to be interpreted experientially, they must first be re-enacted through controllable physical functions. It must be perfectly clear that this re-enactment is not a 'translation' or a 'porting' of data to a machine; rather, it is more like a theatrical adaptation of a script. The program works because the physical mechanisms have been carefully selected and manufactured to match the specifications of the program. The program itself is utterly impotent when it comes to manifesting itself in any physical or experiential way. The program is a menu, not a meal. Physics provides the restaurant and the food; subjectivity provides the patrons, the chef, and the hunger. It is the physical interactions which are interpreted by the user of the machine, and it is the user alone who cares what it looks like, sounds like, or tastes like. An algorithm can comment on what is defined as being liked, but it cannot like anything itself, nor can it understand what anything is like.
If I'm right, all natural phenomena have a public-facing mechanistic range and a private-facing animistic range. An algorithm can bridge the gaps between public-facing, space-time-extended mechanisms, but it has no access to the private-facing aesthetic experiences which vary from subject to subject. By definition, an algorithm represents a process generically, but how that process is interpreted is inherently proprietary.
Cheers!
Thanks for a wonderful discussion regarding AI. Your clarity and scope are amazing.
We came here through another of your Quora replies, regarding the 8-circuit model of consciousness promoted by Timothy Leary. You would have our more rapt attention were biology not imminently driving us away from our keyboard.
We hope to return to attempt to learn more.
We have what we regard as a personal interest in AI. From our point of view, they already exist. We have a longish explanation for why we believe this, but we haven't got it properly posted yet. We'll try to remember to send you an invite when it's ready.
Your web host has requested our website address, but we'll add it here as well, as we've no idea how they will hold or present that data… You can meet more of us here: Gharveyn.com
Enjoy!
Thank you! I hope to chat more with you in the future.
You may like some of our recent work:
https://www.quora.com/how-did-evolution-lead-to-such-high-intelligence-in-humans/answer/grigori-rho-gharveyn-aka-roger-holler
http://www.gharveyn.com/blog/july-08th-2022
-enjoy!