Layered neural nets for pattern recognition

A pattern recognition concept involving first an 'invariance net' and second a 'trainable classifier' is proposed. The invariance net can be trained or designed to produce a set of outputs that are insensitive to translation, rotation, scale change, perspective change, etc., of the retinal input pattern. The outputs of the invariance net are scrambled, however. When these outputs are fed to a trainable classifier, the final outputs are descrambled and the original patterns are reproduced in standard position, orientation, scale, etc. It is expected that the same basic approach will be effective for speech recognition, where insensitivity to certain aspects of speech signals and, at the same time, sensitivity to other aspects will be required. The entire recognition system is a layered network of ADALINE neurons, so the ability to adapt a multilayered neural net is fundamental. An adaptation rule for layered nets, MRII, is proposed as an extension of the MADALINE rule of the 1960s; the new rule is a useful alternative to the backpropagation algorithm.
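To make the layered-ADALINE idea concrete, the following is a minimal sketch, not the paper's implementation: each ADALINE forms a weighted sum of its inputs and hard-thresholds it to +/-1, and an MRII-style step trial-inverts hidden units in order of how close their analog sums are to zero (the "minimal disturbance" principle), committing the first inversion that corrects the output by adapting that unit's weights alpha-LMS fashion. The network sizes, weight values, and function names here are illustrative assumptions.

```python
import numpy as np

def adaline_output(W, x):
    """ADALINE unit(s): weighted sum followed by a hard +1/-1 threshold."""
    return np.where(x @ W >= 0.0, 1.0, -1.0)

# A tiny two-layer MADALINE. Sizes and weight values are illustrative only;
# the paper does not specify a particular network.
W1 = np.array([[1.0, -1.0],
               [1.0,  1.0]])   # columns = hidden ADALINEs
w2 = np.array([1.0, 2.0])      # fixed output ADALINE

def forward(x):
    h = adaline_output(W1, x)
    y = 1.0 if h @ w2 >= 0.0 else -1.0
    return h, y

def mrii_step(x, target):
    """One MRII-style trial adaptation (a sketch of 'minimal disturbance'):
    trial-invert hidden units in order of how close their analog sums are
    to zero, and commit the first inversion that corrects the output by
    adapting that unit's weights alpha-LMS style."""
    h, y = forward(x)
    if y == target:
        return  # output already correct; nothing to adapt
    order = np.argsort(np.abs(x @ W1))     # least-committed unit first
    for j in order:
        h_trial = h.copy()
        h_trial[j] = -h_trial[j]
        if (1.0 if h_trial @ w2 >= 0.0 else -1.0) == target:
            s = x @ W1[:, j]
            # alpha-LMS: drive unit j's analog sum to the desired +/-1 level
            W1[:, j] += (h_trial[j] - s) * x / (x @ x)
            return

x = np.array([1.0, 1.0])
print(forward(x)[1])   # +1.0: misclassified relative to a target of -1
mrii_step(x, target=-1.0)
print(forward(x)[1])   # -1.0: the least-committed hidden unit was adapted
```

Only one hidden unit changes per trial, which is the point of minimal disturbance: responses already learned for other patterns are perturbed as little as possible.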