Abstract
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial relationships from a single task demonstration of two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstrations can lead to more natural and intuitive interactions with the robot.
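The abstract's "interaction mesh" can be pictured concretely. A minimal sketch, assuming the mesh is a volumetric Delaunay structure over the keypoints of both partners and the object (in the spirit of prior interaction-mesh work), whose per-vertex Laplacian coordinates encode the spatial relations to be preserved when generalizing to new situations; all keypoint data and function names below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: build an interaction mesh over two partners and an
# object, then compute Laplacian coordinates that encode spatial relations.
import numpy as np
from scipy.spatial import Delaunay

def build_interaction_mesh(human_pts, robot_pts, object_pts):
    """Tetrahedralize the combined 3-D point cloud of both partners and the object."""
    points = np.vstack([human_pts, robot_pts, object_pts])
    tri = Delaunay(points)            # 3-D Delaunay tetrahedralization
    return points, tri.simplices      # vertices and tetrahedra (index quadruples)

def laplacian_coordinates(points, simplices):
    """Per-vertex offset from the centroid of its mesh neighbors.

    Keeping these offsets (approximately) invariant under deformation is one
    way to preserve the demonstrated spatial relationships between partners.
    """
    n = len(points)
    neighbors = [set() for _ in range(n)]
    for simplex in simplices:
        for i in simplex:
            neighbors[i].update(j for j in simplex if j != i)
    return np.array([points[i] - points[sorted(neighbors[i])].mean(axis=0)
                     for i in range(n)])

# Toy keypoints (made up): 4 human joints, 4 robot joints, 2 object markers.
rng = np.random.default_rng(0)
human = rng.normal(0.0, 0.3, (4, 3))
robot = rng.normal(1.0, 0.3, (4, 3))
obj = rng.normal(0.5, 0.1, (2, 3))

pts, tets = build_interaction_mesh(human, robot, obj)
lap = laplacian_coordinates(pts, tets)
print(pts.shape, tets.shape[1], lap.shape)  # 10 vertices, 4-vertex simplices
```

In a generalization step one would deform the mesh to a new object or partner pose while minimizing the change in these Laplacian coordinates; that optimization is beyond this sketch.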
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 1053-1065
Number of pages | 13
Journal | Autonomous Robots
Volume | 42
Issue number | 5
DOIs |
State | Published - Jun 1, 2018
Keywords
- Handover
- Human–human demonstration
- Human–robot interaction
- Interaction mesh
ASJC Scopus subject areas
- Artificial Intelligence