Abstract
One critical aspect of robotic visual learning is capturing the precedence relations among primitive actions by observing humans performing manipulation activities. Current state-of-the-art spatial–temporal representations do not fully capture these precedence relations. In this paper, we present a novel activity representation, the Manipulation Precedence Graph (MPG), and its associated overall planning module, with the goal of enabling robots to learn manipulation activities from human demonstrations with overall planning. Experiments conducted on three publicly available manipulation activity video corpora, as well as on a simulation platform, validate that (1) the MPG generated by our system is robust to noisy detections from the perception modules; (2) the overall planning module is able to generate parallel action sequences that the robot can execute concurrently; and (3) the overall system improves the robot's manipulation execution efficiency.
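To make the core idea concrete, here is a minimal sketch (not the paper's actual algorithm) of how a precedence graph over primitive actions can be partitioned into levels of mutually unordered actions, which a planner could then schedule in parallel. The action names and the level-by-level grouping (a variant of Kahn's topological sort) are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def parallel_levels(actions, precedes):
    """Partition actions into levels: actions within one level have no
    precedence constraint between them and may execute in parallel.
    `precedes` is a list of (before, after) edges of the precedence graph."""
    indegree = {a: 0 for a in actions}
    successors = defaultdict(list)
    for before, after in precedes:
        successors[before].append(after)
        indegree[after] += 1

    # Start with all actions that have no prerequisites.
    frontier = [a for a in actions if indegree[a] == 0]
    levels = []
    while frontier:
        levels.append(sorted(frontier))  # sorted only for deterministic output
        next_frontier = []
        for a in frontier:
            for b in successors[a]:
                indegree[b] -= 1
                if indegree[b] == 0:
                    next_frontier.append(b)
        frontier = next_frontier
    return levels

# Hypothetical cutting task: both grasps can happen in parallel before cutting.
actions = ["grasp_knife", "grasp_cucumber", "cut", "place"]
precedes = [("grasp_knife", "cut"), ("grasp_cucumber", "cut"), ("cut", "place")]
print(parallel_levels(actions, precedes))
# → [['grasp_cucumber', 'grasp_knife'], ['cut'], ['place']]
```

In this sketch, the first level contains the two grasps, which share no precedence edge, so a planner may dispatch them concurrently; serial actions (`cut`, then `place`) each occupy their own level.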
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 126-135 |
| Number of pages | 10 |
| Journal | Robotics and Autonomous Systems |
| Volume | 116 |
| DOIs | |
| State | Published - Jun 2019 |
Keywords
- AI and robotics
- Intelligent systems
- Manipulation precedence graph
- Understanding human activities
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- General Mathematics
- Computer Science Applications