Baseline-Free Sampling in Parameter Exploring Policy Gradients: Super Symmetric PGPE

Policy gradient methods that explore directly in parameter space are among the most effective and robust direct policy search methods and have drawn considerable attention lately. The basic method in this field, Policy Gradients with Parameter-based Exploration (PGPE), uses two samples that are symmetric around the current hypothesis to avoid the misleading reward estimates that the usual baseline approach yields on problems with asymmetric reward distributions. The exploration parameters, however, are still updated via a baseline, leaving the exploration prone to asymmetric reward distributions. In this paper we show how the exploration parameters can also be sampled quasi-symmetrically, despite being limited rather than free parameters. We give a transformation approximation that yields quasi-symmetric samples with respect to the exploration parameters without changing the overall sampling distribution. Finally, we demonstrate that sampling symmetrically for the exploration parameters as well is superior to the original sampling approach in terms of the number of samples needed and robustness.
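
To make the baseline issue concrete, the following is a minimal sketch of one basic PGPE update with symmetric sampling, assuming the standard formulation: the mean update is baseline-free because the mirrored sample pair cancels the baseline, while the exploration (standard deviation) update still subtracts a reward baseline. Function names, learning rates, and the baseline handling are illustrative; this is not the paper's Super Symmetric PGPE update.

```python
import numpy as np

def pgpe_symmetric_step(mu, sigma, reward_fn, baseline,
                        alpha_mu=0.1, alpha_sigma=0.05):
    """One basic PGPE update with symmetric sampling (illustrative sketch)."""
    # Draw one perturbation per parameter and evaluate the mirrored pair.
    eps = np.random.normal(0.0, sigma)
    r_plus = reward_fn(mu + eps)
    r_minus = reward_fn(mu - eps)

    # Mean update: the symmetric pair makes a reward baseline unnecessary here.
    mu = mu + alpha_mu * eps * (r_plus - r_minus) / 2.0

    # Exploration update: basic PGPE still compares against a baseline,
    # which is the step the paper replaces with quasi-symmetric sampling.
    r_avg = (r_plus + r_minus) / 2.0
    grad_sigma = (eps ** 2 - sigma ** 2) / sigma
    sigma = sigma + alpha_sigma * (r_avg - baseline) * grad_sigma

    # Keep the standard deviations strictly positive.
    return mu, np.clip(sigma, 1e-6, None)
```

Calling this step repeatedly with, for instance, a running-average reward as the baseline reproduces what the abstract calls the usual baseline approach for the exploration parameters.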
