Self-association in multilayer linear networks with limited connectivity

We study the behavior of linear neural networks with constrained connectivity of Hebbian and anti-Hebbian synapses. We derive general results for cascade architectures, including a formula for the number of layers needed so that the final outputs approximate the principal components to a prescribed accuracy. Simulation results confirm the analysis.
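As a concrete illustration of the kind of linear, Hebbian-trained unit such networks are built from, the sketch below implements Oja's single-unit Hebbian rule, which drives a linear neuron's weight vector toward the first principal component of its input. This is an assumption-laden toy example, not the paper's cascade architecture or its anti-Hebbian lateral connections; it only shows the Hebbian building block.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_first_pc(X, eta=0.01, epochs=50):
    """Estimate the first principal component of zero-mean data X
    with Oja's Hebbian rule (illustrative sketch only)."""
    _, d = X.shape
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                   # output of the linear unit
            w += eta * y * (x - y * w)  # Hebbian term plus weight decay
    return w / np.linalg.norm(w)

# Zero-mean data with one dominant variance direction.
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.5])
X -= X.mean(axis=0)

w = oja_first_pc(X)

# Compare against the top eigenvector of the sample covariance.
_, vecs = np.linalg.eigh(np.cov(X.T))
top = vecs[:, -1]
print(abs(w @ top))  # alignment close to 1 when the rule has converged
```

A full principal-subspace network would add anti-Hebbian lateral synapses to decorrelate successive units; cascading such stages is what the layer-count analysis in the abstract refers to.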
