TY - GEN
T1 - Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)
AU - Vogt, David
AU - Stepputtis, Simon
AU - Weinhold, Richard
AU - Jung, Bernhard
AU - Ben Amor, Hani
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/12/30
Y1 - 2016/12/30
N2 - This video demonstrates a novel imitation learning approach for learning human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture. From this, an interaction model is learned that inherently captures important spatial relationships as well as temporal synchrony of body movements between the two interacting partners. The interaction model is based on interaction meshes that were first adopted by the computer graphics community for the offline animation of interacting virtual characters. We developed a variant of interaction meshes that is suitable for real-time human-robot interaction scenarios. During human-robot collaboration, the learned interaction model allows for adequate spatio-temporal adaptation of the robot's behavior to the movements of the human cooperation partner. Thus, the presented approach is well suited for collaborative tasks requiring continuous body movement coordination of a human and a robot. The feasibility of the approach is demonstrated with the example of a cooperative Lego rocket assembly task.
AB - This video demonstrates a novel imitation learning approach for learning human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture. From this, an interaction model is learned that inherently captures important spatial relationships as well as temporal synchrony of body movements between the two interacting partners. The interaction model is based on interaction meshes that were first adopted by the computer graphics community for the offline animation of interacting virtual characters. We developed a variant of interaction meshes that is suitable for real-time human-robot interaction scenarios. During human-robot collaboration, the learned interaction model allows for adequate spatio-temporal adaptation of the robot's behavior to the movements of the human cooperation partner. Thus, the presented approach is well suited for collaborative tasks requiring continuous body movement coordination of a human and a robot. The feasibility of the approach is demonstrated with the example of a cooperative Lego rocket assembly task.
UR - http://www.scopus.com/inward/record.url?scp=85010208247&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85010208247&partnerID=8YFLogxK
U2 - 10.1109/HUMANOIDS.2016.7803267
DO - 10.1109/HUMANOIDS.2016.7803267
M3 - Conference contribution
AN - SCOPUS:85010208247
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 142
EP - 143
BT - Humanoids 2016 - IEEE-RAS International Conference on Humanoid Robots
PB - IEEE Computer Society
T2 - 16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016
Y2 - 15 November 2016 through 17 November 2016
ER -