Learning in connectionist networks using the Alopex algorithm

The Alopex algorithm is presented as a universal learning algorithm for connectionist networks. The algorithm is stochastic and can be used for learning in networks of any topology, including those with feedback. The neurons may have any transfer function, and learning may involve minimization of any error measure. The efficacy of the algorithm is investigated by applying it to multilayer perceptrons on problems such as XOR, parity, and encoder, and the results are compared with those obtained using the backpropagation learning algorithm. For the specific case of the XOR problem, it is shown that an information-theoretic error measure yields a smoother error surface with fewer local minima. An appropriate 'annealing' scheme for the algorithm is described, and it is shown that Alopex can escape from local minima.
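The abstract does not give the update rule itself; the sketch below follows the commonly published form of Alopex (correlation-based stochastic weight perturbation with a temperature schedule) applied to a small XOR network. The step size, annealing schedule, and network sizes are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # A 2-2-1 sigmoid MLP; w is a flat vector of 9 parameters.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def error(w):
    # Sum-squared error; Alopex itself is indifferent to the choice
    # of error measure, which is part of its appeal.
    return np.sum((forward(w, X) - Y) ** 2)

delta = 0.01                      # fixed perturbation size (assumed)
T = 0.01                          # initial 'temperature' (assumed)
w = rng.normal(0.0, 1.0, 9)
prev_dw = rng.choice([-delta, delta], 9)
prev_E = error(w)

for n in range(20000):
    E = error(w)
    # Correlate each weight's last move with the last change in error.
    corr = prev_dw * (E - prev_E)
    # Probability of repeating the previous move: high when the move
    # lowered the error, ~0.5 when T is large (random walk).
    p = 1.0 / (1.0 + np.exp(corr / T))
    dw = np.where(rng.random(9) < p, prev_dw, -prev_dw)
    w += dw
    prev_dw, prev_E = dw, E
    # Simplified annealing: tie T to a running average of the
    # correlation magnitudes (one common variant of the schedule).
    T = 0.9 * T + 0.1 * np.mean(np.abs(corr)) + 1e-8
```

Because every weight is perturbed by the same rule using only the scalar error, the procedure needs no gradient information, which is what allows it to run unchanged on feedback networks and arbitrary transfer functions.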
