Adaptability of the backpropagation procedure

Possible paradigms for concept learning by feedforward neural networks include discrimination and recognition. An interesting aspect of this dichotomy is that the recognition-based implementation (an autoassociator) can learn certain domains much more efficiently than the discrimination-based one (a multi-layer perceptron, or MLP), despite the close structural relationship between the two systems. The purpose of this paper is to explain this difference in efficiency. We suggest that it is caused by a difference in the generalization strategy adopted by the backpropagation procedure in the two cases: the autoassociator uses a (fast) bottom-up strategy, whereas the MLP resorts to a (slow) top-down one, even though both systems are optimized by the same backpropagation procedure. This result is important because it sheds light on the nature of backpropagation's adaptive capability. From a practical viewpoint, it suggests a deterministic way to increase the efficiency of backpropagation-trained feedforward networks.
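The contrast between the two paradigms can be sketched in code. The following is a minimal illustration (not the paper's implementation): both systems are the same one-hidden-layer sigmoid network trained by plain backpropagation, and only the training signal differs. The discrimination-based MLP maps inputs to a class label and sees positive and negative examples; the recognition-based autoassociator maps inputs to themselves, trains on positives only, and flags negatives by high reconstruction error. The toy concept, network sizes, and learning-rate settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_train(X, T, hidden=4, lr=1.0, epochs=10000):
    """Plain batch backpropagation for a one-hidden-layer sigmoid net.

    The same routine trains both systems; only the targets T differ
    (class labels for the MLP, the inputs themselves for the
    autoassociator).  Hyperparameters are illustrative assumptions.
    """
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, T.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)             # forward pass
        Y = sigmoid(H @ W2)
        dY = (Y - T) * Y * (1 - Y)      # output-layer delta (squared error)
        dH = (dY @ W2.T) * H * (1 - H)  # hidden-layer delta
        W2 -= lr * H.T @ dY / len(X)
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2

def forward(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)

# Toy concept over 4-bit patterns: positive iff bits 0 and 1 are both on.
X = np.array([[a, b, c, d] for a in (0, 1) for b in (0, 1)
              for c in (0, 1) for d in (0, 1)], dtype=float)
labels = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# Discrimination: MLP trained on all patterns, positives and negatives.
W1, W2 = backprop_train(X, labels)
acc = np.mean((forward(X, W1, W2) > 0.5) == (labels > 0.5))

# Recognition: autoassociator trained on the positives only (target = input);
# negatives are detected by their larger reconstruction error.
pos = X[labels[:, 0] == 1]
A1, A2 = backprop_train(pos, pos)
err = ((forward(X, A1, A2) - X) ** 2).mean(axis=1)
pos_err = err[labels[:, 0] == 1].mean()
neg_err = err[labels[:, 0] == 0].mean()

print(f"MLP accuracy: {acc:.2f}")
print(f"reconstruction error  positives: {pos_err:.3f}  negatives: {neg_err:.3f}")
```

Because the autoassociator never needs counter-examples, its training set and its generalization task are both simpler, which is one face of the efficiency gap the abstract discusses.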
