Sequence-based multimodal apprenticeship learning for robot perception and decision making

Fei Han, Xue Yang, Yu Zhang, Hao Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Apprenticeship learning has recently attracted wide attention due to its capability of allowing robots to learn physical tasks directly from demonstrations provided by human experts. Most previous techniques assumed that the state space is known a priori or employed simple state representations that usually suffer from perceptual aliasing. Different from previous research, we propose a novel approach named Sequence-based Multimodal Apprenticeship Learning (SMAL), which is capable of simultaneously fusing temporal information and multimodal data, and of integrating robot perception with decision making. To evaluate the SMAL approach, experiments are performed using both simulations and real-world robots in challenging search and rescue scenarios. The empirical study has validated that our SMAL approach can effectively learn plans for robots to make decisions using sequences of multimodal observations. Experimental results have also shown that SMAL outperforms baseline methods that use individual images.
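The abstract describes SMAL only at a high level, so the toy sketch below is purely illustrative and is not the authors' algorithm. It shows the general idea the abstract contrasts with image-only baselines: making a decision from a short sequence of multimodal observations, here fused by temporal stacking and concatenation and matched against expert demonstrations. All names, feature dimensions, and the nearest-demonstration policy are assumptions introduced for illustration only.

```python
# Hypothetical illustration only: a toy policy that (i) stacks a short window of
# multimodal observations into one feature vector and (ii) returns the action of
# the nearest demonstrated sequence. This is NOT the SMAL formulation from the
# paper; it merely sketches deciding from sequences of multimodal observations
# rather than from individual images.
import numpy as np

WINDOW = 3       # consecutive observations per decision (assumed)
VISION_DIM = 4   # toy visual feature size (assumed)
DEPTH_DIM = 2    # toy depth/range feature size (assumed)


def fuse_sequence(vision_seq, depth_seq):
    """Concatenate a temporal window of visual and depth features."""
    v = np.asarray(vision_seq[-WINDOW:]).reshape(-1)  # temporal stacking
    d = np.asarray(depth_seq[-WINDOW:]).reshape(-1)
    return np.concatenate([v, d])                     # multimodal fusion


class NearestDemoPolicy:
    """Pick the action whose demonstrated sequence feature is closest."""

    def __init__(self):
        self.features, self.actions = [], []

    def add_demonstration(self, vision_seq, depth_seq, action):
        self.features.append(fuse_sequence(vision_seq, depth_seq))
        self.actions.append(action)

    def decide(self, vision_seq, depth_seq):
        query = fuse_sequence(vision_seq, depth_seq)
        dists = [np.linalg.norm(query - f) for f in self.features]
        return self.actions[int(np.argmin(dists))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    policy = NearestDemoPolicy()
    # Two toy "expert" demonstrations with made-up actions.
    policy.add_demonstration(rng.random((WINDOW, VISION_DIM)),
                             rng.random((WINDOW, DEPTH_DIM)), "turn_left")
    policy.add_demonstration(rng.random((WINDOW, VISION_DIM)),
                             rng.random((WINDOW, DEPTH_DIM)), "go_forward")
    print(policy.decide(rng.random((WINDOW, VISION_DIM)),
                        rng.random((WINDOW, DEPTH_DIM))))
```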

Original language: English (US)
Title of host publication: ICRA 2017 - IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2584-2591
Number of pages: 8
ISBN (Electronic): 9781509046331
DOIs: https://doi.org/10.1109/ICRA.2017.7989301
State: Published - Jul 21 2017
Event: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017 - Singapore, Singapore
Duration: May 29 2017 - Jun 3 2017

Other

Other: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Country: Singapore
City: Singapore
Period: 5/29/17 - 6/3/17

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

Han, F., Yang, X., Zhang, Y., & Zhang, H. (2017). Sequence-based multimodal apprenticeship learning for robot perception and decision making. In ICRA 2017 - IEEE International Conference on Robotics and Automation (pp. 2584-2591). [7989301] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989301
