Learning grasp affordance densities

We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) that link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. Once a sufficient amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment, run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
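The core idea, a nonparametric density over grasp poses built by kernel density estimation from observed outcomes, can be sketched as follows. This is a simplified illustration, not the paper's implementation: it models only 3-D grasp positions with an isotropic Gaussian kernel, whereas the full method operates on 6-DOF poses (with a separate orientation kernel) and refines the density via importance sampling. All names (`grasp_density`, `h`) and the bandwidth value are illustrative assumptions.

```python
import numpy as np

def grasp_density(query, samples, weights, h=0.05):
    """Kernel density estimate of grasp success at a query position.

    samples : (N, 3) array of object-relative grasp positions tried so far.
    weights : (N,) nonnegative outcome weights (e.g. 1 for success, 0 for failure,
              or importance weights after resampling).
    h       : Gaussian kernel bandwidth in meters (illustrative value).
    """
    # Squared distance from the query to every recorded grasp position.
    d2 = np.sum((samples - query) ** 2, axis=1)
    # Isotropic 3-D Gaussian kernel evaluated at each sample.
    k = np.exp(-d2 / (2 * h * h)) / ((2 * np.pi * h * h) ** 1.5)
    # Weighted mixture of kernels: the continuous grasp density.
    return np.sum(weights * k) / np.sum(weights)
```

A query near a cluster of successful grasps yields a higher density than one far from all recorded successes, which is what lets the robot pick the grasp maximizing success probability under reaching constraints.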
