Grasping novel objects with a dexterous robotic hand through neuroevolution

Robotic grasping of a target object without advance knowledge of its three-dimensional model is a challenging problem. Many studies indicate that robot learning from demonstration (LfD) is a promising way to improve grasping performance, but fully automating the grasping task in unforeseen circumstances remains difficult. As an alternative to LfD, this paper leverages limited human supervision to achieve robotic grasping of unknown objects in unforeseen circumstances. The technical question is what form of human supervision minimizes the supervisor's effort. The approach here applies a human-supplied bounding box to focus the robot's visual processing on the target object, thereby reducing the dimensionality of the robot's computer vision problem. After the human supervisor defines the bounding box through the human-machine interface, the rest of the grasping task is automated through a vision-based feature-extraction approach in which the dexterous hand learns to grasp objects, without relying on pre-computed object models, via the NEAT neuroevolution algorithm. Given only low-level sensing data from a commercial depth sensor (the Kinect), our approach evolves neural networks that identify appropriate hand positions and orientations for grasping novel objects. Further, the machine learning results from simulation have been validated by transferring the trained networks to a physical robot, Dreamer, built by Meka Robotics. The results demonstrate that grasping novel objects by carrying neuroevolution from simulation to reality is possible.
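The pipeline the abstract describes, a human-supplied bounding box narrowing the depth image, feature extraction over the cropped region, and an evolved network mapping features to a hand pose, can be sketched roughly as below. This is a minimal illustration under loose assumptions, not the paper's implementation: it uses a fixed-topology network and a plain mutation/truncation-selection loop in place of full NEAT (which also evolves topology), and a synthetic distance-to-target fitness in place of a simulated grasp-quality score; all function names, the grid descriptor, and the numeric values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_to_bbox(depth, bbox):
    """Keep only the human-supplied region of interest (x, y, w, h)."""
    x, y, w, h = bbox
    return depth[y:y + h, x:x + w]

def extract_features(patch, grid=(4, 4)):
    """Mean depth over a coarse grid: a low-dimensional stand-in for the
    paper's vision-based feature extraction."""
    gh, gw = grid
    ph, pw = patch.shape[0] // gh, patch.shape[1] // gw
    return np.array([patch[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].mean()
                     for i in range(gh) for j in range(gw)])

def policy(weights, features):
    """Single-layer network mapping features to a hand pose
    (x, y, z, roll, pitch, yaw), each component squashed into [-1, 1]."""
    return np.tanh(weights.reshape(6, -1) @ features)

def evolve(fitness, n_weights, pop_size=20, gens=30, sigma=0.1):
    """Toy fixed-topology neuroevolution: keep the best quarter of the
    population each generation and refill with Gaussian-mutated copies."""
    pop = rng.normal(size=(pop_size, n_weights))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]
        kids = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        pop = np.vstack([elite, kids + rng.normal(scale=sigma, size=kids.shape)])
    scores = np.array([fitness(w) for w in pop])
    return pop[scores.argmax()]

# Synthetic stand-ins for a Kinect depth frame and a simulator's grasp score.
depth = rng.random((120, 160))                        # fake depth image
feats = extract_features(crop_to_bbox(depth, (40, 30, 64, 64)))
target = np.array([0.2, -0.1, 0.5, 0.0, 0.3, -0.2])  # "good" pose, illustrative only
fitness = lambda w: -np.linalg.norm(policy(w, feats) - target)

best = evolve(fitness, n_weights=6 * feats.size)
pose = policy(best, feats)                            # evolved grasp pose
```

In the actual sim-to-real setting described above, the fitness function would instead come from evaluating the candidate grasp in simulation, and the winning network, rather than a weight vector, would be transferred to the physical hand.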
