Linear neural networks which minimize the output variance

The authors analyze constrained linear architectures that learn according to Hebb's rule to minimize the output energy, and they study the conditions under which such networks act as decorrelating (square-root) filters. In particular, they show how constrained architectures can decorrelate efficiently using Hebb's rule alone, and they extend the analysis to networks with arbitrary interconnections. The aim is twofold: the design of useful architectures and an understanding of the patterns of connectivity observed in biological systems. The treatment is limited to linear neurons computing simple linear combinations, and attention is restricted to decorrelating networks that preserve the dimensionality of the input rather than using the output variance to compress it into a space of fewer dimensions.
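To make the decorrelation mechanism concrete, here is a minimal sketch, not the authors' exact architecture: a linear network with lateral connections whose weights grow in proportion to output correlations, a Földiák-style local rule. The input covariance, learning rate, and network size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D Gaussian input (hypothetical test data, not from the paper).
n, d = 20000, 2
C = np.array([[3.0, 1.2],
              [1.2, 1.0]])
X = rng.multivariate_normal(np.zeros(d), C, size=n)

# Linear network with lateral weights L (zero diagonal): (I + L) y = x,
# so each unit subtracts a weighted sum of the other units' outputs.
# The local update dL_ij = eta * y_i * y_j strengthens the lateral
# inhibition wherever outputs are correlated; at equilibrium the output
# cross-correlations E[y_i y_j], i != j, are driven to zero.
L = np.zeros((d, d))
eta = 0.01
for x in X:
    y = np.linalg.solve(np.eye(d) + L, x)   # settled linear response
    dL = eta * np.outer(y, y)               # correlation-driven update
    np.fill_diagonal(dL, 0.0)               # no self-connections
    L += dL

Y = np.linalg.solve(np.eye(d) + L, X.T).T
print("output covariance:\n", np.cov(Y.T))  # off-diagonals shrink toward 0
```

In this example the lateral weights also lower the total output variance, consistent with the energy-minimization view. A full square-root (whitening) filter would additionally equalize the output variances, which this sketch deliberately omits.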
