Abstract

One critical aspect of robotic visual learning is capturing the precedence relations among primitive actions by observing humans performing manipulation activities. Current state-of-the-art spatial–temporal representations do not fully capture these precedence relations. In this paper, we present a novel activity representation, the Manipulation Precedence Graph (MPG), together with its associated overall planning module, with the goal of enabling robots to learn manipulation activities from human demonstrations and plan their execution as a whole. Experiments conducted on three publicly available manipulation activity video corpora, as well as on a simulation platform, validate that (1) the MPG generated by our system is robust to noisy detections from the perception modules; (2) the overall planning module generates action sequences that the robot can execute in parallel; and (3) the overall system improves the robot's manipulation execution efficiency.
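The MPG's internal formulation is not given in this abstract, but its role suggests a directed acyclic graph over primitive actions whose edges encode "must precede" constraints. Below is a minimal, hypothetical Python sketch of how such a graph could be turned into a parallel plan via Kahn-style topological leveling: any action whose predecessors have all completed can be dispatched concurrently. The action names, the edge list, and the `parallel_plan` helper are illustrative assumptions, not the paper's actual construction or planner.

```python
from collections import defaultdict

# Hypothetical sketch only: model a Manipulation Precedence Graph (MPG)
# as a DAG whose nodes are primitive actions and whose directed edges
# encode "must precede" constraints. parallel_plan() groups actions into
# layers via Kahn-style topological leveling; all actions within a layer
# have no unmet precedence constraints and can run concurrently.

def parallel_plan(actions, precedes):
    indegree = {a: 0 for a in actions}          # unmet predecessor counts
    successors = defaultdict(list)
    for before, after in precedes:
        successors[before].append(after)
        indegree[after] += 1

    layers, ready = [], [a for a in actions if indegree[a] == 0]
    while ready:
        layers.append(ready)
        next_ready = []
        for done in ready:                      # "execute" the whole layer
            for nxt in successors[done]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:          # all predecessors finished
                    next_ready.append(nxt)
        ready = next_ready
    return layers

# Toy demonstration with made-up primitive actions:
actions = ["grasp_knife", "cut_bread", "grasp_jar", "open_jar", "spread"]
precedes = [("grasp_knife", "cut_bread"), ("grasp_jar", "open_jar"),
            ("cut_bread", "spread"), ("open_jar", "spread")]
print(parallel_plan(actions, precedes))
# [['grasp_knife', 'grasp_jar'], ['cut_bread', 'open_jar'], ['spread']]
```

A multi-arm robot could dispatch each layer's actions to separate arms, which illustrates the kind of execution-time saving the abstract's third validation point refers to.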

Original language: English (US)
Pages (from-to): 126-135
Number of pages: 10
Journal: Robotics and Autonomous Systems
Volume: 116
DOI: 10.1016/j.robot.2019.03.011
State: Published - Jun 1, 2019

Keywords

  • AI and robotics
  • Intelligent systems
  • Manipulation precedence graph
  • Understanding human activities

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Mathematics (all)
  • Computer Science Applications

Cite this

Robot learning of manipulation activities with overall planning through precedence graph. / Ye, Xin; Lin, Zhe; Yang, Yezhou.

In: Robotics and Autonomous Systems, Vol. 116, 01.06.2019, p. 126-135.

Research output: Contribution to journal › Article

@article{8b0598bc4c894427809ba267c091d949,
title = "Robot learning of manipulation activities with overall planning through precedence graph",
abstract = "One critical aspect of robotic visual learning is capturing the precedence relations among primitive actions by observing humans performing manipulation activities. Current state-of-the-art spatial–temporal representations do not fully capture these precedence relations. In this paper, we present a novel activity representation, the Manipulation Precedence Graph (MPG), together with its associated overall planning module, with the goal of enabling robots to learn manipulation activities from human demonstrations and plan their execution as a whole. Experiments conducted on three publicly available manipulation activity video corpora, as well as on a simulation platform, validate that (1) the MPG generated by our system is robust to noisy detections from the perception modules; (2) the overall planning module generates action sequences that the robot can execute in parallel; and (3) the overall system improves the robot's manipulation execution efficiency.",
keywords = "AI and robotics, Intelligent systems, Manipulation precedence graph, Understanding human activities",
author = "Xin Ye and Zhe Lin and Yezhou Yang",
year = "2019",
month = "6",
day = "1",
doi = "10.1016/j.robot.2019.03.011",
language = "English (US)",
volume = "116",
pages = "126--135",
journal = "Robotics and Autonomous Systems",
issn = "0921-8890",
publisher = "Elsevier",
}

TY - JOUR

T1 - Robot learning of manipulation activities with overall planning through precedence graph

AU - Ye, Xin

AU - Lin, Zhe

AU - Yang, Yezhou

PY - 2019/6/1

Y1 - 2019/6/1

AB - One critical aspect of robotic visual learning is capturing the precedence relations among primitive actions by observing humans performing manipulation activities. Current state-of-the-art spatial–temporal representations do not fully capture these precedence relations. In this paper, we present a novel activity representation, the Manipulation Precedence Graph (MPG), together with its associated overall planning module, with the goal of enabling robots to learn manipulation activities from human demonstrations and plan their execution as a whole. Experiments conducted on three publicly available manipulation activity video corpora, as well as on a simulation platform, validate that (1) the MPG generated by our system is robust to noisy detections from the perception modules; (2) the overall planning module generates action sequences that the robot can execute in parallel; and (3) the overall system improves the robot's manipulation execution efficiency.

KW - AI and robotics

KW - Intelligent systems

KW - Manipulation precedence graph

KW - Understanding human activities

UR - http://www.scopus.com/inward/record.url?scp=85063732707&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063732707&partnerID=8YFLogxK

U2 - 10.1016/j.robot.2019.03.011

DO - 10.1016/j.robot.2019.03.011

M3 - Article

VL - 116

SP - 126

EP - 135

JO - Robotics and Autonomous Systems

JF - Robotics and Autonomous Systems

SN - 0921-8890

ER -