Efficient Bayesian local model learning for control

Model-based control is essential for compliant control and force control in many modern complex robots, such as humanoid or disaster-response robots. Due to many unknown and hard-to-model nonlinearities, analytical models of such robots are often only very rough approximations. However, modern optimizing controllers frequently depend on reasonably accurate models, and their robustness and performance degrade greatly if model errors are too large. Machine learning has long been expected to provide automatic empirical model synthesis, yet so far research has produced only feasibility studies and no learning algorithms that run reliably on complex robots. In this paper, we combine two promising families of regression techniques into a more powerful regression learning system. On the one hand, locally weighted regression techniques are computationally efficient, but hard to tune due to a variety of data-dependent meta-parameters. On the other hand, Bayesian regression offers largely automatic and robust methods to set learning parameters, but quickly becomes computationally infeasible for large, high-dimensional data sets. By reducing the complexity of Bayesian regression in the spirit of local model learning through variational approximations, we arrive at a novel algorithm that is computationally efficient and easy to initialize for robust learning. Evaluations on several data sets demonstrate very good learning performance and the potential for a general regression learning tool for robotics.
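To make the starting point concrete, below is a minimal sketch of plain locally weighted (ridge) regression, the building block the paper sets out to improve. It is not the proposed algorithm: the kernel width `h` and regularizer `lam` here are exactly the kind of hand-tuned, data-dependent meta-parameters that the paper's variational Bayesian treatment would set automatically; the function name and parameter values are illustrative assumptions.

```python
# Minimal sketch of locally weighted ridge regression (not the paper's
# algorithm). The kernel width `h` and regularizer `lam` are the
# data-dependent meta-parameters that a Bayesian treatment would infer.
import numpy as np

def lwr_predict(X, y, x_query, h=0.5, lam=1e-6):
    """Predict y at x_query from a locally weighted linear fit."""
    # Gaussian weights centered on the query point
    w = np.exp(-0.5 * np.sum((X - x_query) ** 2, axis=1) / h ** 2)
    # Augment inputs with a bias term
    Phi = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    # Weighted regularized least squares:
    # beta = (Phi^T W Phi + lam I)^{-1} Phi^T W y
    A = Phi.T @ W @ Phi + lam * np.eye(Phi.shape[1])
    beta = np.linalg.solve(A, Phi.T @ W @ y)
    return np.append(x_query, 1.0) @ beta

# Example: learn sin(x) from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(lwr_predict(X, y, np.array([1.0])))  # close to sin(1.0) = 0.841
```

Note how prediction quality hinges entirely on choosing `h` and `lam` well for the data at hand; this sensitivity is the tuning burden that motivates combining local models with Bayesian hyperparameter inference.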
