One-shot learning of human–robot handovers with triadic interaction meshes

David Vogt, Simon Stepputtis, Bernhard Jung, Hani Ben Amor

Research output: Contribution to journal › Article

Abstract

We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial relationships from a single task demonstration of two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstrations can lead to more natural and intuitive interactions with the robot.
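
To make the abstract's central construct concrete, below is a minimal illustrative sketch, not the authors' implementation: it stands in a plain Delaunay tetrahedralization for the paper's data-driven mesh construction, links human joints, robot joints, and the manipulated object into one triadic mesh, and computes Laplacian (differential) coordinates, the quantity interaction-mesh approaches typically preserve when deforming a demonstration to a new situation. All variable names and joint counts are hypothetical.

    # Illustrative sketch only (not the authors' code): a triadic interaction
    # mesh over human joints, partner/robot joints, and the object, with
    # Laplacian coordinates encoding their spatial relationships.
    import numpy as np
    from scipy.spatial import Delaunay

    def laplacian_coordinates(points, simplices):
        # Differential coordinate of each vertex w.r.t. its mesh neighbors;
        # keeping these approximately fixed while the human and object move
        # lets the robot-side vertices adapt to the new situation.
        neighbors = [set() for _ in range(len(points))]
        for s in simplices:                      # tetrahedra over all vertices
            for i in s:
                neighbors[i].update(j for j in s if j != i)
        return np.array([p - points[list(nb)].mean(axis=0)
                         for p, nb in zip(points, neighbors)])

    # One frame of a human-human demonstration (hypothetical joint counts):
    human   = np.random.rand(15, 3)               # tracked giver joints
    partner = np.random.rand(7, 3) + [1, 0, 0]    # receiver joints, later mapped to the robot
    obj     = np.array([[0.5, 0.5, 0.5]])         # manipulated object centroid

    vertices = np.vstack([human, partner, obj])   # triadic: all three linked
    mesh = Delaunay(vertices)                     # stand-in for the data-driven mesh
    lap = laplacian_coordinates(vertices, mesh.simplices)

In approaches of this kind, spatial generalization then amounts to solving for robot vertex positions that minimize the change in these Laplacian coordinates given the newly observed human and object positions.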

Original language: English (US)
Pages (from-to): 1–13
Number of pages: 13
Journal: Autonomous Robots (ISSN 0929-5593, Springer Netherlands)
DOI: 10.1007/s10514-018-9699-4
State: Accepted/In press - Feb 6, 2018

Keywords

  • Handover
  • Human–human demonstration
  • Human–robot interaction
  • Interaction mesh

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Vogt, D., Stepputtis, S., Jung, B., & Ben Amor, H. (2018). One-shot learning of human–robot handovers with triadic interaction meshes. Autonomous Robots, 1–13. https://doi.org/10.1007/s10514-018-9699-4
