Training neural networks with PSO in dynamic environments

Supervised neural networks (NNs) have been applied successfully to classification problems. Various NN training algorithms have been developed, including the particle swarm optimiser (PSO), which has been shown to outperform the standard back-propagation training algorithm on a selection of problems. It is, however, usually assumed that the decision boundaries do not change over time. This assumption is often invalid for real-life problems, and training algorithms must be adapted both to track changing decision boundaries and to detect new boundaries as they appear. Various dynamic versions of the PSO have already been developed, and this paper investigates the applicability of dynamic PSO variants to NN training in changing environments.
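As an illustration of the approach described above, the sketch below trains a small feedforward NN by treating its flattened weight vector as a PSO particle and minimising the classification error. This is a minimal gbest PSO with an inertia weight, not the paper's specific algorithm; the network size, toy XOR data, and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch: training a 2-4-1 NN with a global-best PSO
# on a toy XOR task. In a dynamic setting the same loop would run on
# streaming data, with personal bests re-evaluated when the
# environment (i.e. the decision boundary) changes.
import numpy as np

rng = np.random.default_rng(0)

# Toy classification data (XOR), standing in for a problem whose
# decision boundary could later drift.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # all weights and biases

def forward(w, X):
    """Unpack a flat particle into NN weights and run a forward pass."""
    i = 0
    W1 = w[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN].reshape(N_HIDDEN, 1); i += N_HIDDEN
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-((h @ W2).ravel() + b2)))  # sigmoid output

def mse(w):
    """Fitness of a particle: mean squared error of the NN it encodes."""
    return float(np.mean((forward(w, X) - y) ** 2))

# Standard gbest PSO with commonly used inertia/acceleration settings.
SWARM, ITERS = 30, 300
W_INERTIA, C1, C2 = 0.729, 1.49445, 1.49445

pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_fit = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_fit)].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([mse(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()

best_error = float(min(pbest_fit))
```

A swarm that has converged in this way loses diversity, which is exactly the problem the dynamic PSO variants discussed in the paper address, e.g. by re-initialising part of the swarm, repelling particles via charges, or maintaining multiple swarms when a change in the environment is detected.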
