Online Bayesian changepoint detection for articulated motion models

We introduce CHAMP, an algorithm for online Bayesian changepoint detection in settings where it is difficult or undesirable to integrate over the parameters of candidate models. CHAMP is used in combination with several articulation models to detect changes in the articulated motion of objects in the world, allowing a robot to infer physically grounded task information. We focus on three settings where a changepoint model is appropriate: objects with intrinsic articulation relationships that can change over time, object-object contact that results in quasi-static articulated motion, and assembly tasks in which each step changes articulation relationships. We experimentally demonstrate that this system can be used to infer various types of information from demonstration data, including causal manipulation models, human-robot grasp correspondences, and skill verification tests.
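The paper itself defines CHAMP and its articulation models; as a generic illustration only, the sketch below implements a standard online run-length filtering recursion for Bayesian changepoint detection with a simple Gaussian mean-change observation model. The function names and parameters are illustrative assumptions, not CHAMP's API, and the Gaussian model stands in for the articulation models (e.g. rigid, prismatic, rotational) that the abstract refers to.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate normal density, vectorized over mean/var arrays."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def online_changepoint(data, hazard=0.01, mu0=0.0, var0=10.0, obs_var=1.0):
    """Run-length filtering for a Gaussian mean-change model (illustrative).

    Returns R, where R[t, r] = P(run length == r | data[:t]).
    A changepoint shows up as the MAP run length dropping toward zero.
    """
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    mu = np.array([mu0])    # posterior mean for each run-length hypothesis
    var = np.array([var0])  # posterior variance for each hypothesis
    for t, x in enumerate(data, start=1):
        # Predictive density of x under each run-length hypothesis.
        pred = gaussian_pdf(x, mu, var + obs_var)
        # Each run either grows by one or is cut by a changepoint.
        R[t, 1:t + 1] = R[t - 1, :t] * pred * (1.0 - hazard)
        R[t, 0] = np.sum(R[t - 1, :t] * pred * hazard)
        R[t] /= np.sum(R[t])
        # Conjugate Gaussian update; run length 0 restarts from the prior.
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        post_mu = post_var * (mu / var + x / obs_var)
        mu = np.concatenate(([mu0], post_mu))
        var = np.concatenate(([var0], post_var))
    return R
```

In CHAMP's setting, the Gaussian predictive would be replaced by the fit of each candidate articulation model to a window of observed object poses, since integrating over those model parameters in closed form is exactly what the abstract says is difficult or undesirable.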
