One-shot learning of human–robot handovers with triadic interaction meshes

David Vogt, Simon Stepputtis, Bernhard Jung, Hani Ben Amor

Research output: Contribution to journal › Article

3 Scopus citations

Abstract

We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial relationships from a single task demonstration performed by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstrations can lead to more natural and intuitive interactions with the robot.
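The abstract does not give implementation details, but the core idea of an interaction mesh linking both partners and the object can be illustrated with a rough sketch. Assuming (hypothetically) that each participant and the object are tracked as 3D marker points, one simple construction connects every marker to its nearest neighbors across the triad and stores each point in Laplacian coordinates (its offset from the centroid of its mesh neighbors), a relative encoding that mesh-based retargeting methods try to preserve when adapting a demonstration to a new situation. All marker positions and the choice of k are illustrative, not taken from the paper.

```python
import math

def nearest_neighbors(points, k):
    """For each point, return the indices of its k nearest other points."""
    nbrs = []
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        nbrs.append([j for _, j in dists[:k]])
    return nbrs

def laplacian_coordinates(points, nbrs):
    """Express each point relative to the centroid of its mesh neighbors."""
    lap = []
    for i, p in enumerate(points):
        cx = sum(points[j][0] for j in nbrs[i]) / len(nbrs[i])
        cy = sum(points[j][1] for j in nbrs[i]) / len(nbrs[i])
        cz = sum(points[j][2] for j in nbrs[i]) / len(nbrs[i])
        lap.append((p[0] - cx, p[1] - cy, p[2] - cz))
    return lap

# Hypothetical marker positions (meters): giver's hand, receiver's hand,
# and the handed-over object midway between them.
giver    = [(0.0, 0.0, 1.0), (0.2, 0.0, 1.1)]
receiver = [(1.0, 0.0, 1.0), (0.8, 0.0, 1.1)]
obj      = [(0.5, 0.0, 1.05)]

points = giver + receiver + obj
nbrs = nearest_neighbors(points, k=2)
lap = laplacian_coordinates(points, nbrs)
```

Because the Laplacian coordinates encode only relative geometry, the same mesh can in principle be re-solved for new partner positions while keeping the triadic spatial relationships intact.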

Original language: English (US)
Pages (from-to): 1-13
Number of pages: 13
Journal: Autonomous Robots
DOIs
State: Accepted/In press - Feb 6 2018

Keywords

  • Handover
  • Human–human demonstration
  • Human–robot interaction
  • Interaction mesh

ASJC Scopus subject areas

  • Artificial Intelligence

