Artificial Intelligence Is an Oxymoron
Here in California, Silicon Valley types are still enamored with the idea that computers have brought humankind to “the edge of change comparable to the rise of human life on Earth.”
The fantasy does not stop there. One well-known artificial-intelligence aficionado by the name of Raymond Kurzweil envisions being able to “upload” the contents of our brain and thought processes into computers, making “immortality” possible within his lifetime.
For now, computer geeks of all kinds are occupied with how to make machines that are ‘self-aware.’ Therefore it behooves us to have some idea of what self-awareness is, and isn’t.
The term ‘self-awareness’ has two main meanings. The first, most commonly used in anthropological and academic circles, relates to the emergent phenomenon of self in humans. In this meaning, the accent is on self rather than awareness.
Show a chimp, a gorilla, and an orangutan a mirror, and let them familiarize themselves with their reflection. Take the mirror away for a day and paint a bright red dot onto the head of each primate. Then reintroduce the mirror.
The gorilla will act in exactly the same way it did when the mirror was first presented. But the chimp and the orangutan will quizzically put a finger to the red dot, indicating that each has formed a mental image of itself. This, in its most reduced form, is what scientists mean by self-awareness.
Of course there is another meaning to self-awareness, which does not pertain to the formation of the vaunted self, but rather the attention to and negation of its movement.
A better term for this rare quality is self-knowing. That term can be misleading as well, however, because self-knowing has nothing to do with knowledge, only with being watchful of oneself as one is in the present.
Self-knowing reflects the holistic function of the brain, and is inclusive of thoughts, emotions, sensory input, and bodily reactions. For such awareness to really work however, the illusory watcher that stands apart has to be negated.
That’s where meditation comes in. By allowing attention to gather and quicken to the point that the whole brain catches thought in the act of separating itself from itself, the watcher dissolves. The ‘I’ is the core expression of division. In meditation, the ‘I’ ceases acting. Attention alone acts to end inward division.
Dispensing with all systems and methods of meditation (since they too are products of thought), it’s essential to make the time to sit quietly, preferably outdoors, and simply observe passively. You’ll notice that there is a fundamental division between ‘me’ and ‘my thoughts.’
The next time you’re aware of this universal duality, ask yourself: What is this entity that feels separate from what it observes, even within oneself? The question will spur the whole brain to watch the movement of thought very carefully, with a certain degree of playfulness.
Spontaneously at some point, you’ll have the most important insight a human being can have: The thinker is actually an inseparable part of thought! One sees that thought fabricated the thinker, which then came to believe it was separate and permanent.
No computer will ever have that insight, because no computer can ever have an insight.
Thought is based on memory, just as computing is, and insight arises from beyond thought and memory. In the near future, computers will be better than humans at thought. But it takes a healthy, living brain to have an insight.
What about the self? Since the self is a program, computers will be able to have selves. But they will never be capable of attention to the movement of the patterns of thought and emotion that comprise the self, because that requires a living brain.
Therefore some kind of program for self-awareness can be written into computers once they have enough memory and speed, but the idea that awareness of self can spontaneously emerge in computers is absurd. It hasn’t even emerged in most humans yet.
Computers are all thought all the time; humans are mostly thought most of the time.
Computers are replacing the aspects of the brain that are like a computer. Therefore those areas of the brain that don’t resemble computers have to be developed.
Otherwise, the entire human brain will deteriorate.
Martin LeFevre is a contemplative, and non-academic religious and political philosopher. He has been publishing in North America, Latin America, Africa, and Europe (and now New Zealand) for 20 years. Email: martinlefevre@sbcglobal.net . The author welcomes comments.