Connectionist Architectures for Artificial Intelligence

A number of researchers have begun exploring massively parallel architectures in an attempt to get around the limitations of conventional symbol processing. Many of these parallel architectures are connectionist: the system's collection of permanent knowledge is stored as a pattern of connections or connection strengths among the processing elements, so the knowledge directly determines how the processing elements interact rather than sitting passively in a memory, waiting to be looked at by the CPU. Some connectionist schemes use formal, symbolic representations, while others use more analog approaches. Some even develop their own internal representations after seeing examples of the patterns they are to recognize or the relationships they are to store. Connectionism is somewhat controversial in the AI community: it is new, still unproven in large-scale practical applications, and very different in style from the traditional AI approach. The authors have only begun to explore the behavior and potential of connectionist networks. In this article, they describe some of the central issues and ideas of connectionism, as well as some of the unsolved problems facing this approach. Part of the motivation for connectionist research is the possible similarity in function between connectionist networks and the neural networks of the human cortex, but the authors concentrate here on connectionism's potential as a practical technology for building intelligent systems.
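To make the idea of knowledge stored as connection strengths concrete, here is a minimal sketch of one well-known connectionist scheme, a Hopfield-style associative memory. This is an illustrative assumption, not the architecture the article itself proposes, and the names (HopfieldNet, store, recall, sweeps) are the sketch's own. Stored patterns leave a Hebbian imprint on the weight matrix, and retrieval proceeds purely through local interactions among the processing elements, with no central program consulting a passive memory.

```python
import numpy as np

class HopfieldNet:
    """Sketch of an associative memory whose 'knowledge' lives in its weights."""

    def __init__(self, n_units):
        self.n = n_units
        # Connection strengths among processing elements; this matrix
        # *is* the system's stored knowledge.
        self.w = np.zeros((n_units, n_units))

    def store(self, patterns):
        # Hebbian imprinting: strengthen connections between units that
        # are active together in a pattern (entries are +1 / -1).
        for p in patterns:
            p = np.asarray(p, dtype=float)
            self.w += np.outer(p, p)
        np.fill_diagonal(self.w, 0.0)  # no self-connections

    def recall(self, probe, sweeps=5):
        # Retrieval is pure local interaction: each unit repeatedly sets
        # its state to the sign of its weighted input from the others.
        s = np.asarray(probe, dtype=float).copy()
        for _ in range(sweeps):
            for i in np.random.permutation(self.n):  # asynchronous updates
                s[i] = 1.0 if self.w[i] @ s >= 0.0 else -1.0
        return s

net = HopfieldNet(8)
net.store([[1, -1, 1, -1, 1, -1, 1, -1]])
# A probe with one flipped bit settles back to the stored pattern.
print(net.recall([1, -1, 1, 1, 1, -1, 1, -1]))
```

Note that nothing here "looks up" the stored pattern: the corrupted probe is pulled back to it because the connection strengths themselves shape how the units interact, which is the behavior the abstract contrasts with conventional symbol processing.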
