The gregarious particle swarm optimizer (G-PSO)

This paper presents a gregarious particle swarm optimization algorithm (G-PSO) in which the particles explore the search space by aggressively scouting the local minima using only social knowledge. To avoid premature convergence of the swarm, particles that become stuck at a local minimum are re-initialized with a random velocity. Furthermore, G-PSO determines the step size "reactively", based on feedback from the most recent iterations. This contrasts with the basic particle swarm algorithm, in which the particles explore the search space using both an individual "cognitive" component and "social" knowledge, and in which no feedback is used to self-tune the algorithm's parameters. Besides generally improving the average optimal values found, the proposed scheme reduces the computational effort.
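The abstract does not reproduce the update equations, so the following is only a minimal Python sketch of the three ideas it names: a social-only velocity update (no per-particle "cognitive" term), a reactive step size that grows or shrinks depending on whether the swarm best improved recently, and a random velocity restart for stalled particles. All constants (the initial step, the 1.1/0.9 adaptation factors, the 1e-8 stall threshold) and the stall criterion are illustrative assumptions, not the paper's values.

```python
import numpy as np

def sphere(x):
    """Example objective: sphere function (global minimum 0 at the origin)."""
    return np.sum(x ** 2)

def gpso(f, dim=10, n_particles=30, iters=500,
         bounds=(-5.0, 5.0), step0=1.0, seed=0):
    """Sketch of a G-PSO-style optimizer, under the assumptions stated above.

    Velocities are driven only by the 'social' term (attraction toward the
    swarm best); there is no cognitive/personal-best term. The step size is
    adapted 'reactively' from feedback: it grows when the swarm best improves
    and shrinks when it does not. Particles whose velocity collapses (stuck
    near a local minimum) are restarted with a fresh random velocity.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim)) * 0.1
    fx = np.apply_along_axis(f, 1, x)
    g = x[np.argmin(fx)].copy()            # swarm (social) best position
    g_val = fx.min()
    step = step0

    for _ in range(iters):
        # Social-only velocity update: pull every particle toward g.
        v = step * rng.random((n_particles, dim)) * (g - x)
        # Restart stalled particles with a random velocity (assumed criterion).
        stalled = np.linalg.norm(v, axis=1) < 1e-8
        v[stalled] = rng.uniform(-(hi - lo), hi - lo, (int(stalled.sum()), dim)) * 0.1
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        best = fx.argmin()
        if fx[best] < g_val:               # feedback: improvement seen
            g, g_val = x[best].copy(), fx[best]
            step = min(step * 1.1, 2.0)    # explore more boldly
        else:                              # feedback: no improvement
            step = max(step * 0.9, 1e-3)   # contract toward the social best
    return g, g_val

if __name__ == "__main__":
    best_x, best_val = gpso(sphere)
    print(f"best value found: {best_val:.3e}")
```

Note the design contrast with standard PSO: because there is no personal-best memory, the random restarts and the feedback-driven step size must supply all of the diversity that the cognitive term would otherwise provide.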
