Integrated connectionist models: building AI systems on subsymbolic foundations

Symbolic artificial intelligence is motivated by the hypothesis that symbol manipulation is both necessary and sufficient for intelligence. In symbolic systems, knowledge is encoded in explicit symbolic structures, and inferences are drawn by handcrafted rules that sequentially manipulate these structures. Such systems have been quite successful, for example, in modeling in-depth natural language processing, episodic memory, and symbolic problem solving. However, much of the inference in everyday natural language understanding appears to take place immediately, without conscious control, apparently based on associations with past experience. This type of reasoning is difficult to model in the symbolic framework. In contrast, subsymbolic (distributed connectionist) networks represent knowledge as correlations coded in the weights of the network. For a given input, the network computes the most likely answer given its past experience. A number of human-like information-processing properties, such as learning from examples, context sensitivity, generalization, robustness of behavior, and intuitive reasoning, emerge automatically in subsymbolic systems. The major motivation for subsymbolic AI, therefore, is to give a better account of cognitive phenomena that are statistical, or intuitive, in nature.
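The contrast can be made concrete with a minimal, hypothetical sketch; the toy task, the rule table, and the two-layer network below are invented for illustration and are not the model described in this work. The symbolic style is a lookup over handcrafted rules, which fails on any input its author did not anticipate; the subsymbolic style learns the same mapping from examples by plain backpropagation and still produces the statistically most likely answer for a noisy or partial cue.

# Illustrative sketch only: handcrafted symbolic rules vs. a tiny
# distributed network trained on the same toy associations.
import numpy as np

# Symbolic style: knowledge as explicit structures plus handcrafted rules.
RULES = {("restaurant", "ordered"): "ate", ("airport", "boarded"): "flew"}

def symbolic_infer(context, event):
    # Returns None for any input the rule author did not anticipate.
    return RULES.get((context, event))

# Subsymbolic style: knowledge as weights learned from examples.
rng = np.random.default_rng(0)
X = np.array([[1., 0., 1., 0.],    # restaurant + ordered
              [0., 1., 0., 1.]])   # airport + boarded
Y = np.array([[1., 0.],            # -> ate
              [0., 1.]])           # -> flew
W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):              # plain backpropagation on the toy set
    H = sigmoid(X @ W1)
    out = sigmoid(H @ W2)
    d_out = (out - Y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ d_out
    W1 -= 0.5 * X.T @ d_H

# A noisy, partial cue still yields an answer close to "ate" ([1, 0]),
# whereas the symbolic lookup has no matching rule for an unseen form.
noisy = np.array([[0.9, 0.1, 0.8, 0.0]])
print(sigmoid(sigmoid(noisy @ W1) @ W2))      # approximately [1, 0]
print(symbolic_infer("restaurant", "order"))  # None: no exact rule match

The point of the sketch is only the qualitative difference: the learned weights encode correlations, so generalization and robustness to degraded input come for free, while the rule table answers only the cases it explicitly enumerates.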
