Kris Carlson

Just another WordPress.com weblog

My favorite passage by Dave Waltz

Dave Waltz was a friend and a mentor. He passed away in March 2012 from a glioblastoma. A wonderful symposium in his honor was held over the weekend at Brandeis University, ably organized by Prof. Jordan Pollack. Last week I sent the following excerpt from Dave's writings to a few friends who knew him.

…I dispute the heuristic search metaphor, the relationship between physical symbol systems and human cognition, and the nature and "granularity" of the units of thought. The physical symbol system hypothesis, also long shared by AI researchers, is that a vocabulary close to natural language (English, for example, perhaps supplemented by previously unnamed categories and concepts) would be sufficient to express all concepts that ever need to be expressed. My belief is that natural-language-like terms are, for some concepts, hopelessly coarse and vague, and that much finer, "subsymbolic" distinctions must be made, especially for encoding sensory inputs. At the same time, some mental units (for example, whole situations or events—often remembered as mental images) seem to be important carriers of meaning that may not be reducible to tractable structures of words or wordlike entities. Even worse, I believe that words are not in any case carriers of complete meanings but are instead more like index terms or cues that a speaker uses to induce a listener to extract shared memories and knowledge. The degree of detail and number of units needed to express the speaker's knowledge and intent and the hearer's understanding are vastly greater than the number of words used to communicate. In this sense language may be like the game of charades: the speaker transmits relatively little, and the listener generates understanding through the synthesis of the memory items evoked by the speaker's clues. Similarly, I believe that the words that seem widely characteristic of human streams of consciousness do not themselves constitute thought; rather, they represent a projection of our thoughts onto our speech-production faculties. Thus, for example, we may feel happy or embarrassed without ever forming those words, or we may solve a problem by imagining a diagram without words or with far too few words to specify the diagram.

Waltz, David L., "The Prospects for Building Truly Intelligent Machines," Daedalus (Proceedings of the American Academy of Arts and Sciences), Winter 1988, Vol. 117, No. 1, p. 197.

September 24, 2012 Posted by | Artificial Intelligence, Neuroscience
