Adaptive Optimization of Neural Algorithms

Neural learning algorithms are generally very simple, but their convergence is often neither fast nor robust. In this paper we address the important problem of optimal learning-rate adjustment through an adaptive procedure based on a gradient method. The basic idea, which is very simple and has already been used successfully in signal processing, is extended to two neural algorithms: Kohonen self-organizing maps and blind source separation (the Herault-Jutten algorithm). Although this procedure increases the complexity of the algorithms, it remains very attractive: the convergence speed is strongly boosted, the local nature of the learning rule is preserved, and the method is applicable to a learning rule even when the cost function (error) being minimized is not known.
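
To make the idea concrete, here is a minimal sketch, not taken from the paper, of a gradient-adaptive step size applied to an LMS-style filter, in the spirit of the signal-processing technique the abstract refers to (e.g. Mathews-Xie adaptive step size). The variable names, the meta learning rate `rho`, and the clipping bounds are illustrative assumptions; the learning rate `mu` is itself updated by a stochastic gradient step on the instantaneous error.

```python
import numpy as np

# Illustrative sketch (assumption, not the paper's exact rule): the step size mu
# of an LMS-style update is adapted online by a gradient step on the squared
# error with respect to mu, estimated from the correlation of successive
# stochastic gradients.

rng = np.random.default_rng(0)

n_taps = 4
n_samples = 2000
w_true = rng.normal(size=n_taps)                 # unknown system to identify
x = rng.normal(size=n_samples)                   # input signal
d = np.convolve(x, w_true)[:n_samples] + 0.01 * rng.normal(size=n_samples)

w = np.zeros(n_taps)                             # adaptive filter weights
mu = 0.01                                        # learning rate, adapted online
rho = 1e-3                                       # meta learning rate for mu (assumed value)
grad_prev = np.zeros(n_taps)                     # previous gradient estimate

for k in range(n_taps, n_samples):
    x_k = x[k - n_taps + 1:k + 1][::-1]          # current input window
    e = d[k] - w @ x_k                           # instantaneous error
    grad = e * x_k                               # stochastic gradient estimate for w
    # Adapt mu: dJ/dmu is approximated by the inner product of the current and
    # previous gradient estimates (Mathews-Xie style); clip to keep mu stable.
    mu = float(np.clip(mu + rho * (grad @ grad_prev), 1e-5, 0.1))
    w += mu * grad                               # LMS weight update with adaptive step
    grad_prev = grad

print("estimated weights:", np.round(w, 3))
print("true weights     :", np.round(w_true, 3))
```

The same principle, replacing a fixed learning rate by one driven by a gradient estimate, is what the paper extends to the Kohonen and Herault-Jutten learning rules, where the update remains local and no explicit cost function is required.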