An introduction to the parallel distributed processing model of cognition and some examples of how it is changing the teaching of artificial intelligence

Artificial Intelligence programming involves representing knowledge, using paradigms to manipulate that knowledge, and having a learning process modify both the knowledge and the paradigms. One can view this process as building a model of how one thinks, i.e., how the brain operates at the cognitive psychology level [2]. Recently, cognitive scientists have developed a model of how one thinks at the neural level. This model is called the Parallel Distributed Processing (PDP) model of cognition and is described in the definitive work of Rumelhart and McClelland [1]. The idea that we can actually model the brain as an electrical network of neurons and then develop Artificial Intelligence in terms of that model is extremely attractive. This research program has had some success, especially in the areas of sensory perception and motor activity, but it still has problems to overcome before it can be called the ideal foundation for Artificial Intelligence.

Much of the power of the PDP model derives from its learning algorithms. In this paper we present a classification of learning algorithms that helps to organize the many techniques now appearing in the literature. We also discuss how the PDP model is changing the way we teach Artificial Intelligence. This is an important aspect of the model, since it has produced a number of new problem-solving techniques for Artificial Intelligence while also holding out the promise of a better foundation for the basic theory of the field. If the PDP model fulfills that promise, we could develop Artificial Intelligence programs that are really intelligent rather than programs that only appear to be intelligent.
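To give a rough sense of what "modeling the brain as an electrical network of neurons" means computationally, the sketch below shows a single PDP-style unit that forms a weighted sum of its inputs and passes it through a logistic activation function, as in the standard units of Rumelhart and McClelland [1]. The function name, weights, and input values are illustrative assumptions, not material from this paper.

```python
import math

def unit_activation(inputs, weights, bias):
    """Output of one PDP-style unit: logistic function of the weighted input sum."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Illustrative values only: a unit with two inputs.
print(unit_activation([1.0, 0.0], [0.8, -0.4], bias=-0.2))  # approximately 0.65
```

A network of many such units, connected by weights that a learning algorithm adjusts, is the basic structure the PDP model uses to represent and process knowledge.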