Sign language for telemanipulation

Literal teleoperation works poorly. Limited bandwidth, long latencies, and non-anthropomorphic mappings make teleoperation tedious at best and ineffective at worst. Instead, users of teleoperated and semi-autonomous systems want their robots to `just do it for them,' without sacrificing the operator's intent. Our goal is to maximize human strategic control in teleoperator-assisted robotics. In our teleassisted regime, the human operator provides high-level contexts for low-level autonomous robot behaviors. The operator wears an EXOS hand master and communicates via a natural sign language, for example by pointing to an object or adopting a grasp preshape. Each sign indicates an intention (e.g., reaching or grasping) and, where applicable, a spatial context (e.g., the pointing axis or the preshape frame). The robot, a Utah/MIT hand on a Puma arm, acts under local servo control within the prescribed contexts. This paper extends earlier work [Pook & Ballard 1994a] by adding remote visual sensors to the teleassistance repertoire. To view the robot site, the operator wears a Virtual Research helmet coupled to binocular cameras mounted on a second Puma 760. The combined hand and head sensors allow teleassistance to be performed remotely. The example task is opening a door. We also demonstrate the flexibility of the teleassistance model by bootstrapping a `pick and place' task from the door-opening task.
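The abstract describes the sign-to-behavior loop only in prose. The Python sketch below shows one plausible shape for it: classify a glove sample into a sign, then hand the corresponding behavior its spatial context. Every name and threshold in it (Sign, HandSample, classify_sign, dispatch) is a hypothetical stand-in for illustration, not the authors' implementation or the EXOS interface.

# A minimal sketch of sign-to-behavior dispatch. All names and thresholds
# are hypothetical stand-ins, not the paper's code or the EXOS API.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Sequence, Tuple

class Sign(Enum):
    POINT = auto()     # index finger extended: "reach along this axis"
    PRESHAPE = auto()  # fingers curled to a grasp aperture: "grasp here"
    RELAX = auto()     # open hand: no context; the robot idles

@dataclass
class HandSample:
    """One frame from the hand master: per-finger flexion in [0, 1]."""
    flexion: Sequence[float]               # thumb, index, middle, ring
    palm_axis: Tuple[float, float, float]  # pointing direction, world frame

def classify_sign(sample: HandSample) -> Sign:
    """Toy threshold classifier standing in for real sign recognition."""
    thumb, index, middle, ring = sample.flexion
    if index < 0.2 and middle > 0.7 and ring > 0.7:
        return Sign.POINT      # only the index is extended
    if all(0.3 < f < 0.7 for f in sample.flexion):
        return Sign.PRESHAPE   # hand partially closed into a preshape
    return Sign.RELAX

def dispatch(sample: HandSample) -> str:
    """Map a recognized sign to a low-level behavior plus spatial context."""
    sign = classify_sign(sample)
    if sign is Sign.POINT:
        # The pointing axis is the spatial context for an autonomous reach.
        return f"REACH along axis {sample.palm_axis}"
    if sign is Sign.PRESHAPE:
        # The preshape frame parameterizes the grasp servo loop.
        return "GRASP using the preshape frame under local servo control"
    return "IDLE"

if __name__ == "__main__":
    pointing = HandSample(flexion=(0.5, 0.1, 0.9, 0.9),
                          palm_axis=(0.0, 0.7, 0.7))
    print(dispatch(pointing))  # -> REACH along axis (0.0, 0.7, 0.7)

The division of labor is the point the abstract makes: the operator contributes only the sign and its spatial context, and everything downstream of dispatch would run as an autonomous servo loop on the robot.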

[1] Eileen Kowler et al. Reading twisted text: Implications for the role of saccades. Vision Research, 1987.

[2] Dana H. Ballard et al. Principles of animate vision. CVGIP: Image Understanding, 1992.

[3] Richard P. Paul et al. Operating Interaction and Teleprogramming for Subsea Manipulation. 1992.

[4] Dana H. Ballard et al. Deictic teleassistance. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94), 1994.

[5] Tsuneo Yoshikawa et al. Operation modes for cooperating with autonomous functions in intelligent teleoperation systems. Proceedings of the IEEE International Workshop on Robot and Human Communication, 1992.

[6] M. Bornstein et al. Development in Infancy. 1982.

[7] Dana H. Ballard et al. Sensing qualitative events to control manipulation. 1992.

[8] Katsushi Ikeuchi et al. Grasp recognition and manipulative motion characterization from human hand motion sequences. Proceedings of the 1994 IEEE International Conference on Robotics and Automation, 1994.

[9] David Chapman et al. Pengi: An Implementation of a Theory of Activity. AAAI, 1987.

[10] D. H. Ballard et al. Hand-eye coordination during sequential tasks. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 1992.

[11] Katsushi Ikeuchi et al. Towards an assembly plan from observation: fine localization based on face contact constraints. 1991.

[12] Masayuki Inaba et al. Seeing, understanding and doing human task. Proceedings of the 1992 IEEE International Conference on Robotics and Automation, 1992.

[13] Dana H. Ballard et al. Animate Vision. Artificial Intelligence, 1991.

[14] M. Arbib. Coordinated control programs for movements of the hand. 1985.