Assisted teleoperation in changing environments with a mixture of virtual guides

Haptic guidance is a powerful technique for combining the strengths of humans and autonomous systems in teleoperation. The autonomous system provides haptic cues that enable the operator to perform precise movements, while the operator can override the system's plan by leveraging their superior cognitive capabilities. However, providing haptic cues without impairing either party's strengths is challenging: weak forces provide little guidance, whereas strong forces can hinder the operator from realizing their own plan. Based on variational inference, we learn a Gaussian mixture model (GMM) over trajectories that accomplish a given task. The learned GMM is used to construct a potential field that determines the haptic cues. The potential field changes smoothly during teleoperation as our belief over the plans and their respective phases is updated. Furthermore, new plans are learned online when the operator does not follow any of the proposed plans or when the environment changes. User studies confirm that our framework helps users perform teleoperation tasks more accurately than without haptic cues and, in some cases, faster. Moreover, we demonstrate the framework by helping a subject teleoperate a 7-DoF manipulator in a pick-and-place task.
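To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of deriving a guidance force from a GMM fitted to demonstrated trajectory points: taking the negative log-density as the potential, the gradient of the log-density acts as an attractive force toward the demonstrated regions. The demonstration data, gain, and helper names here are hypothetical, and the sketch ignores phase and plan-belief updates.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def guidance_force(gmm, x, gain=1.0):
    """Guidance force as the gradient of the GMM log-density at x.

    For p(x) = sum_k w_k N(x; mu_k, S_k), the gradient is
    grad log p(x) = sum_k r_k(x) S_k^{-1} (mu_k - x),
    where r_k(x) are the posterior responsibilities of the components.
    """
    x = np.atleast_2d(x)
    resp = gmm.predict_proba(x)[0]                 # responsibilities r_k(x)
    force = np.zeros(x.shape[1])
    for k in range(gmm.n_components):
        prec = np.linalg.inv(gmm.covariances_[k])  # S_k^{-1}
        force += resp[k] * prec @ (gmm.means_[k] - x[0])
    return gain * force

# Fit a GMM to (synthetic) demonstrated trajectory points in 2-D.
rng = np.random.default_rng(0)
demo = np.vstack([rng.normal([0.0, 0.0], 0.05, size=(50, 2)),
                  rng.normal([1.0, 1.0], 0.05, size=(50, 2))])
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(demo)

# Near the first cluster, the force pulls back toward its mean at the origin.
f = guidance_force(gmm, np.array([0.2, 0.1]))
```

In a full system the gain would be modulated (and the force saturated) so that the operator can always overpower the cue, which is the balance the abstract describes.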
