Using vision and haptic sensing for human-humanoid haptic joint actions

Proceedings article published in 2013 by Don Joven Agravante, Andrea Cherubini, and Abderrahmane Kheddar
This paper is available in a repository.

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

Human-humanoid haptic joint actions are collaborative tasks that require sustained haptic interaction between both partners. Most research in this field has therefore concentrated on using the robot's haptic sensing alone to infer the human partner's intentions, and interaction controllers are designed from this information. In this paper, the addition of visual sensing is investigated and a suitable framework is developed for combining it with haptic sensing. The framework is then tested on an example haptic joint action, collaboratively carrying a table, with a visual task implemented on top. In one case, the aim is to keep the table level while taking gravity into account; in another, a freely moving ball is balanced to keep it from falling off the table. The experimental results show that the framework properly exploits both information sources to accomplish the task.
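To illustrate the general idea of layering a visual task on top of haptic interaction control, below is a minimal one-dimensional sketch. It is not the paper's framework: it assumes a standard admittance law driven by the measured human force, with a simple proportional visual correction (e.g., on a tracked ball's offset from the table centre) added to the commanded velocity. All gains, the timestep, and the function names are hypothetical.

```python
import numpy as np

# Hypothetical constants; none of these values come from the paper.
DT = 0.01            # control period [s]
M, D = 5.0, 50.0     # virtual mass and damping for the admittance law
K_VISION = 0.8       # proportional gain on the visual error

def admittance_step(v_prev, f_human):
    """One step of a 1-D admittance law, M*dv/dt + D*v = f_human:
    the measured human force is turned into a compliant velocity."""
    dv = (f_human - D * v_prev) / M
    return v_prev + dv * DT

def visual_correction(ball_offset):
    """Proportional term that drives a visually tracked error
    (here, the ball's offset from the table centre) back to zero."""
    return -K_VISION * ball_offset

def control_step(v_prev, f_human, ball_offset):
    # Combine both sensing modalities: haptics sets the compliant
    # motion, and vision adds a correction on top of it.
    return admittance_step(v_prev, f_human) + visual_correction(ball_offset)

# Example: the partner pushes with 10 N while the ball has drifted 0.05 m.
v = 0.0
for _ in range(100):
    v = control_step(v, f_human=10.0, ball_offset=0.05)
print(f"commanded velocity after 1 s: {v:.3f} m/s")
```

In this sketch the haptic channel dominates the commanded motion while the visual term biases it toward the visual goal; the paper's actual controller, task definitions, and gains should be taken from the full text.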