Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)

David Vogt, Simon Stepputtis, Richard Weinhold, Bernhard Jung, Hani Ben Amor

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

2 Citations (Scopus)

Abstract

This video demonstrates a novel imitation learning approach for learning human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture. From this, an interaction model is learned that inherently captures important spatial relationships as well as the temporal synchrony of body movements between the two interacting partners. The interaction model is based on interaction meshes, which were first adopted by the computer graphics community for the offline animation of interacting virtual characters. We developed a variant of interaction meshes that is suitable for real-time human-robot interaction scenarios. During human-robot collaboration, the learned interaction model allows for adequate spatio-temporal adaptation of the robot's behavior to the movements of the human cooperation partner. Thus, the presented approach is well suited for collaborative tasks requiring continuous body-movement coordination between a human and a robot. The feasibility of the approach is demonstrated with the example of a cooperative Lego rocket assembly task.
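To make the interaction-mesh idea concrete: the mesh connects body markers of both partners, stores their relative (Laplacian) coordinates from the demonstration, and at runtime solves for the robot's marker positions so that these stored relations are preserved as the human moves. The sketch below is a minimal 2D toy under assumed marker positions and a hand-picked neighbor graph — not the authors' implementation — and it uses a plain least-squares solve in place of their real-time variant.

```python
import numpy as np

# Toy 2D "interaction mesh" over motion-capture markers of both partners.
# The marker layout, neighbor graph, and motion are illustrative
# assumptions, not data from the paper.
points = np.array([
    [0.0, 0.0],   # 0: human hand      (observed)
    [1.0, 0.0],   # 1: human elbow     (observed)
    [0.5, 1.0],   # 2: human shoulder  (observed)
    [2.0, 0.5],   # 3: robot gripper   (free)
    [3.0, 0.5],   # 4: robot elbow     (free)
])
neighbors = {0: [1, 3], 1: [0, 2, 3], 2: [1, 4], 3: [0, 1, 4], 4: [2, 3]}
fixed, free = [0, 1, 2], [3, 4]
col = {v: k for k, v in enumerate(free)}   # free marker -> unknown index

def laplacian_coords(p):
    """delta_i = p_i - mean(neighbors): encodes local spatial relations."""
    return np.array([p[i] - p[neighbors[i]].mean(axis=0) for i in range(len(p))])

delta = laplacian_coords(points)           # relations from the demonstration

# The human moves (here: rigidly up by 0.5); solve least squares for the
# robot markers so the stored relational coordinates are preserved.
pos = points.copy()
pos[fixed] += np.array([0.0, 0.5])

n, m = len(points), len(free)
A, b = np.zeros((2 * n, 2 * m)), np.zeros(2 * n)
for i in range(n):
    w = 1.0 / len(neighbors[i])
    for d in (0, 1):                       # x and y axes
        r = 2 * i + d
        b[r] = delta[i, d]
        if i in col:
            A[r, 2 * col[i] + d] += 1.0    # unknown robot marker
        else:
            b[r] -= pos[i, d]              # known human marker -> RHS
        for j in neighbors[i]:
            if j in col:
                A[r, 2 * col[j] + d] -= w
            else:
                b[r] += w * pos[j, d]

x, *_ = np.linalg.lstsq(A, b, rcond=None)
pos[free] = x.reshape(m, 2)
print(pos[free])                           # robot markers follow the human
```

Because the human markers here translate rigidly, preserving the Laplacian coordinates simply translates the robot markers by the same offset; with non-rigid human motion, the least-squares solve instead trades off the stored relations against one another, which is what yields the spatio-temporal adaptation described in the abstract.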

Original language: English (US)
Title of host publication: Humanoids 2016 - IEEE-RAS International Conference on Humanoid Robots
Publisher: IEEE Computer Society
Pages: 142-143
Number of pages: 2
ISBN (Electronic): 9781509047185
DOI: 10.1109/HUMANOIDS.2016.7803267
State: Published - Dec 30, 2016
Event: 16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016 - Cancun, Mexico
Duration: Nov 15, 2016 - Nov 17, 2016



ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Human-Computer Interaction
  • Electrical and Electronic Engineering

Cite this

Vogt, D., Stepputtis, S., Weinhold, R., Jung, B., & Ben Amor, H. (2016). Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly). In Humanoids 2016 - IEEE-RAS International Conference on Humanoid Robots (pp. 142-143). [7803267] IEEE Computer Society. https://doi.org/10.1109/HUMANOIDS.2016.7803267
