Experiments with Motor Primitives in Table Tennis

Efficient acquisition of new motor skills is among the most important abilities for making robot applications more flexible, reducing the amount and cost of human programming, and making future robots more autonomous. However, most machine learning approaches to date cannot meet this challenge, as they do not scale to the domain of high-dimensional anthropomorphic and service robots. Instead, robot skill learning needs to rely on task-appropriate approaches and domain insights. A particularly powerful approach is built on the concept of re-usable motor primitives. These have been used to learn a variety of “elementary movements” such as striking movements (e.g., hitting a T-ball, striking a table tennis ball), rhythmic movements (e.g., drumming, gaits for legged locomotion, paddling balls on a string), grasping, jumping, and many others. Here, we take this approach to the next level and show experimentally how most elements required for table tennis can be addressed using motor primitives. We present four important components: (i) a motor primitive formulation that can deal with hitting and striking movements; (ii) how these primitives can be initialized by imitation learning and (iii) generalized by reinforcement learning; and (iv) how selection, generalization, and pruning of motor primitives can be handled using a mixture of motor primitives. The resulting experimental prototypes work well in practice.
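To make the motor primitive formulation concrete, the following is a minimal sketch of a discrete dynamic movement primitive in the standard Ijspeert-style form (a critically damped spring-damper system toward a goal, modulated by a learned forcing term and a decaying phase variable). All parameter values, the basis-centre heuristic, and the function name are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def dmp_rollout(y0, g, w, tau=1.0, dt=0.001, T=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Integrate a discrete dynamic movement primitive (sketch).

    y0: start position, g: goal, w: basis-function weights
    (e.g., fitted from a demonstration by imitation learning).
    Returns the resulting position trajectory.
    """
    n = len(w)
    # basis-function centres spread along the decaying phase x in (0, 1]
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))
    h = n / c  # widths; a common heuristic, assumed here
    y, z, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        # forcing term: weighted basis functions, scaled by the
        # movement amplitude and gated by the phase variable x
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        zd = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        yd = z / tau
        xd = -alpha_x * x / tau
        z += zd * dt
        y += yd * dt
        x += xd * dt
        traj.append(y)
    return np.array(traj)
```

With zero weights the forcing term vanishes and the trajectory simply converges to the goal `g`; learning (imitation or reinforcement) shapes `w` to produce the desired hitting or striking movement. Hitting-movement variants additionally specify a target velocity at the hitting point, which this sketch omits.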
