The authors introduce two forms of unsymmetric principal component analysis (UPCA): the cross-correlation UPCA problem and the linear approximation UPCA problem. Both concern the SVD of the input-teacher cross-correlation matrix, taken directly in the first problem and after prewhitening in the second; the second problem is also equivalent to reduced-rank Wiener filtering. For the former problem, the authors propose an unsymmetric linear model that extracts one or more components using lateral inhibition connections in the hidden layer, and they establish its numerical convergence properties theoretically. For the linear approximation UPCA problem, back-propagation can be extended either with a straightforward deflation procedure or with lateral orthogonalizing connections in the hidden layer. All proposed models were tested, and the simulation results confirm the theoretical expectations.
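The two batch problems underlying the neural models can be sketched directly in NumPy. This is a minimal illustration, not the authors' adaptive algorithm: it assumes synthetic Gaussian data, estimates the cross-correlation and autocorrelation matrices from samples, and computes (a) the SVD of the input-teacher cross-correlation matrix (cross-correlation UPCA) and (b) the SVD after prewhitening the input, whose rank-r truncation yields the reduced-rank Wiener filter (linear approximation UPCA). All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples of input x (dim p) and teacher y (dim q).
n, p, q = 500, 6, 4
X = rng.standard_normal((n, p))
W_true = rng.standard_normal((p, q))
Y = X @ W_true + 0.1 * rng.standard_normal((n, q))

# (a) Cross-correlation UPCA: SVD of the input-teacher
#     cross-correlation matrix C_xy itself.
C_xy = X.T @ Y / n                       # p x q cross-correlation
U, s, Vt = np.linalg.svd(C_xy, full_matrices=False)

# (b) Linear approximation UPCA: SVD after prewhitening, i.e. of
#     L^{-1} C_xy where C_xx = L L^T (Cholesky). A rank-r truncation
#     gives the reduced-rank Wiener filter approximating Y from X.
C_xx = X.T @ X / n                       # p x p input autocorrelation
L = np.linalg.cholesky(C_xx)
Cw = np.linalg.solve(L, C_xy)            # whitened cross-correlation
Uw, sw, Vwt = np.linalg.svd(Cw, full_matrices=False)

r = 2                                    # number of components kept
# Rank-r Wiener filter: W_r = L^{-T} U_r diag(s_r) V_r^T
W_r = np.linalg.solve(L.T, Uw[:, :r] * sw[:r]) @ Vwt[:r]
Y_hat = X @ W_r                          # rank-r approximation of Y
```

Keeping all `min(p, q)` components recovers the full Wiener filter `C_xx^{-1} C_xy`; truncating to rank r retains only the r strongest input-teacher correlation directions, which is what the proposed networks extract adaptively.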