Learning functions across many orders of magnitude

Learning non-linear functions can be difficult when the magnitude of the target function is unknown beforehand, because most learning algorithms are not scale invariant. We propose an algorithm that adaptively normalizes these targets; this is complementary to recent advances in input normalization. Importantly, whenever the normalization is updated, the method rescales the function approximator so that its unnormalized outputs are preserved, which avoids the instability that a non-stationary normalization would otherwise introduce. The method can be combined with any learning algorithm and any non-linear function approximator, including the important special case of deep neural networks. We empirically validate the method in supervised learning and reinforcement learning, and apply it to learning to play Atari 2600 games. Previous work on applying deep learning to this domain relied on clipping the rewards to make learning across different games more homogeneous, but this exploits the domain-specific knowledge that in these games counting rewards is often almost as informative as summing them. With our adaptive normalization we can remove this heuristic without diminishing overall performance, and we even improve performance on some games, such as Ms. Pac-Man and Centipede, on which previous methods did not perform well.
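The output-preserving rescaling can be illustrated with a minimal sketch. Assume a linear output layer on top of learned features h, with the unnormalized prediction read out as sigma * (W h + b) + mu, where mu and sigma are running estimates of the target mean and scale. Whenever the statistics change, (W, b) are rescaled so that this unnormalized prediction is exactly unchanged. The class name `Normalizer`, the scalar (rather than per-output) statistics, and the step size `beta` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


class Normalizer:
    """Running mean/scale of the targets, with output-preserving updates."""

    def __init__(self, beta=1e-3):
        self.beta = beta  # step size for the running moments (assumed value)
        self.mu = 0.0     # running mean of the targets
        self.nu = 1.0     # running second moment of the targets

    @property
    def sigma(self):
        # Scale estimate, guarded against degenerate (near-zero) variance.
        var = max(self.nu - self.mu ** 2, 0.0)
        return max(np.sqrt(var), 1e-6)

    def update(self, target, W, b):
        """Update (mu, sigma) toward the new target, rescaling the last
        layer (W, b) in place so that sigma * (W h + b) + mu is unchanged."""
        old_mu, old_sigma = self.mu, self.sigma
        self.mu += self.beta * (target - self.mu)
        self.nu += self.beta * (target ** 2 - self.nu)
        new_sigma = self.sigma
        # Compensate exactly for the change in normalization:
        # W' = (sigma_old / sigma_new) W
        # b' = (sigma_old * b + mu_old - mu_new) / sigma_new
        W *= old_sigma / new_sigma
        b[:] = (old_sigma * b + old_mu - self.mu) / new_sigma

    def normalize(self, target):
        # The learner regresses W h + b toward this normalized target.
        return (target - self.mu) / self.sigma


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(1, 4)), np.zeros(1)
    norm = Normalizer()
    h = rng.normal(size=4)
    before = norm.sigma * (W @ h + b) + norm.mu  # unnormalized prediction
    norm.update(target=1000.0, W=W, b=b)         # large-magnitude target
    after = norm.sigma * (W @ h + b) + norm.mu
    assert np.allclose(before, after)            # outputs preserved
```

In a full learner, gradient steps would fit the normalized output W h + b to `normalize(target)`, so the optimization always sees targets of roughly unit scale, while the unnormalized read-out tracks the true target magnitude, whatever order of magnitude it turns out to have.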
