Self-Organized Formation of Various Invariant-Feature Filters in the Adaptive-Subspace SOM

The adaptive-subspace self-organizing map (ASSOM) is a modular neural network architecture, the modules of which learn to identify input patterns subject to some simple transformations. The learning process is unsupervised, competitive, and related to that of the traditional SOM (self-organizing map). Each neural module becomes adaptively specific to some restricted class of transformations, and modules close to each other in the network become tuned to similar features in an orderly fashion. If different transformations exist in the input signals, different subsets of ASSOM units become tuned to these transformation classes.
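To make the described learning scheme concrete, below is a minimal NumPy sketch of an ASSOM-style training loop, assuming the standard ingredients the abstract alludes to: each module holds an orthonormal basis spanning a subspace, the winner for an episode of transformed inputs is the module with the largest projection energy, and the winner's neighbors rotate their basis vectors toward the episode samples before re-orthonormalization. The lattice size, learning-rate and neighborhood schedules, helper names, and the toy translated-sinusoid episodes are all illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy setup (assumed values): a 1-D lattice of modules, each
# holding a 2-D orthonormal basis (subspace) over 'dim'-dimensional inputs.
n_modules = 10        # number of ASSOM units on the lattice
subspace_dim = 2      # basis vectors per module
dim = 16              # input vector dimensionality
episode_len = 5       # samples per episode (transformed versions of a pattern)

def orthonormalize(B):
    """Gram-Schmidt-style re-orthonormalization of the columns of B (via QR)."""
    Q, _ = np.linalg.qr(B)
    return Q

# Random orthonormal initial bases, one per module.
bases = [orthonormalize(rng.standard_normal((dim, subspace_dim)))
         for _ in range(n_modules)]

def projection_energy(B, episode):
    """Sum of squared norms of the projections of the episode onto span(B)."""
    proj = episode @ B @ B.T          # episode has shape (episode_len, dim)
    return np.sum(proj ** 2)

def train_episode(bases, episode, lr, sigma):
    lattice = np.arange(n_modules)
    # Competition: the winner is the module whose subspace captures the
    # largest projection energy over the whole episode.
    energies = [projection_energy(B, episode) for B in bases]
    winner = int(np.argmax(energies))

    # Cooperation: the winner and its lattice neighbors (Gaussian neighborhood)
    # rotate their basis vectors toward the episode samples.
    h = np.exp(-0.5 * ((lattice - winner) / sigma) ** 2)
    for i, B in enumerate(bases):
        if h[i] < 1e-3:
            continue
        for x in episode:
            x_norm = np.linalg.norm(x)
            proj_norm = np.linalg.norm(B.T @ x)
            if x_norm < 1e-12 or proj_norm < 1e-12:
                continue
            # Rotation operator (I + a * x x^T) applied to each basis vector.
            a = lr * h[i] / (proj_norm * x_norm)
            B = B + a * np.outer(x, x) @ B
        bases[i] = orthonormalize(B)
    return winner

# Illustrative training loop: each episode is one sinusoidal pattern under a
# translation (random phase shift), i.e. a single simple transformation class.
for step in range(2000):
    freq = rng.uniform(1.0, 3.0)
    phases = rng.uniform(0, 2 * np.pi, size=episode_len)
    t = np.arange(dim)
    episode = np.stack([np.sin(2 * np.pi * freq * t / dim + p) for p in phases])
    lr = 0.1 * (1.0 - step / 2000)             # decaying learning rate
    sigma = 3.0 * (1.0 - step / 2500) + 0.5    # shrinking neighborhood radius
    train_episode(bases, episode, lr, sigma)
```

With translated inputs as above, the modules' basis pairs tend toward quadrature (sine/cosine-like) filters, which is the kind of invariant-feature detector the abstract describes; feeding episodes from several transformation classes would, per the abstract, let different subsets of units specialize to different classes.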
