Analysis Tools for Neural Networks

A large volume of neural network research in the 1980s involved applying backpropagation networks to difficult and generally poorly understood tasks. Success was sometimes measured solely by the ability of the network to reproduce the required mapping. The difficulty with this approach, which is essentially a black box analysis, is that we are left with little additional understanding of the problem or of the way in which the neural network has solved it. Techniques that can look inside the black box are required. This report focuses on two statistical analysis techniques, Principal Components Analysis and Canonical Discriminant Analysis, as tools for analysing and interpreting network behaviour in the hidden unit layers.
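To make the intended use of these techniques concrete, the following is a minimal sketch (not the report's own code) of applying PCA and a linear discriminant analysis, used here as a stand-in for canonical discriminant analysis, to hidden-unit activations. The activation matrix, layer size, and class labels are hypothetical placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical hidden-unit activations: 200 input patterns passed through
# a layer of 10 hidden units, each pattern assigned to one of 3 classes.
activations = rng.normal(size=(200, 10))
labels = rng.integers(0, 3, size=200)

# Principal Components Analysis: directions of maximum variance in
# hidden-unit space, computed without reference to the class labels.
pca = PCA(n_components=2)
pc_scores = pca.fit_transform(activations)
print("PCA explained variance ratios:", pca.explained_variance_ratio_)

# Linear discriminant analysis: directions that best separate the classes,
# analogous in spirit to canonical discriminant analysis of hidden units.
lda = LinearDiscriminantAnalysis(n_components=2)
cd_scores = lda.fit_transform(activations, labels)
print("Discriminant scores shape:", cd_scores.shape)
```

Plotting the resulting two-dimensional scores, coloured by class, is one way to see whether the hidden layer has organised the input patterns into separable clusters.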
