Plan explicability and predictability for robot task planning

Yu Zhang, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

Intelligent robots and machines are becoming pervasive in human-populated environments. A desirable capability of these agents is to respond to goal-oriented commands by autonomously constructing task plans. However, such autonomy can add significant cognitive load and potentially introduce safety risks to humans when agents behave in unexpected ways. Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans. While there exists previous work that studied socially acceptable robots that interact with humans in 'natural ways', and work that investigated legible motion planning, there is no general solution for high-level task planning. To address this issue, we introduce the notions of plan explicability and predictability. To compute these measures, first, we postulate that humans understand agent plans by associating abstract tasks with agent actions, which can be considered as a labeling process. We learn the labeling scheme of humans for agent plans from training examples using conditional random fields (CRFs). Then, we use the learned model to label a new plan to compute its explicability and predictability. These measures can be used by agents to proactively choose or directly synthesize plans that are more explicable and predictable to humans. We provide evaluations on a synthetic domain and with a physical robot to demonstrate the effectiveness of our approach.
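
The labeling-and-scoring idea from the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it trains a linear-chain CRF to reproduce a human labeling of agent actions with abstract tasks, then scores a new plan's explicability as the fraction of actions that receive a confident task label. The feature design, the toy plans and labels, the "NONE" label for unexplainable actions, and the scoring formula are assumptions made purely for illustration; only the use of CRFs for learning the labeling scheme comes from the paper.

import sklearn_crfsuite  # pip install sklearn-crfsuite

def action_features(plan, i):
    # Simple per-action features: the action name plus its neighbors.
    return {
        "action": plan[i],
        "prev": plan[i - 1] if i > 0 else "<START>",
        "next": plan[i + 1] if i < len(plan) - 1 else "<END>",
    }

def plan_to_features(plan):
    return [action_features(plan, i) for i in range(len(plan))]

# Hypothetical training data: each plan is a sequence of agent actions and
# each label is the abstract task a human associated with that action
# ("NONE" marks actions the human could not explain).
train_plans = [
    ["move_to_kitchen", "pick_cup", "move_to_table", "place_cup"],
    ["move_to_kitchen", "open_fridge", "pick_milk", "close_fridge"],
]
train_labels = [
    ["navigate", "fetch", "navigate", "deliver"],
    ["navigate", "fetch", "fetch", "NONE"],
]

# Learn the human labeling scheme (hyperparameters are arbitrary defaults).
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([plan_to_features(p) for p in train_plans], train_labels)

def explicability_score(plan, min_confidence=0.5):
    # Fraction of actions whose most likely label is a real task (not
    # "NONE") with marginal probability above a threshold; a simplified
    # proxy, not the paper's exact explicability measure.
    marginals = crf.predict_marginals([plan_to_features(plan)])[0]
    explained = sum(
        1 for dist in marginals
        if max(dist, key=dist.get) != "NONE" and max(dist.values()) >= min_confidence
    )
    return explained / len(plan)

# Label a new plan and score it; an agent could compare candidate plans
# this way and prefer the more explicable one.
new_plan = ["move_to_kitchen", "pick_cup", "open_fridge", "place_cup"]
print(crf.predict([plan_to_features(new_plan)])[0])
print("explicability:", round(explicability_score(new_plan), 2))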

Original language: English (US)
Title of host publication: ICRA 2017 - IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1313-1320
Number of pages: 8
ISBN (Electronic): 9781509046331
DOI: 10.1109/ICRA.2017.7989155
State: Published - Jul 21, 2017
Event: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017 - Singapore, Singapore
Duration: May 29, 2017 - Jun 3, 2017

Other

Other: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Country: Singapore
City: Singapore
Period: 5/29/17 - 6/3/17

Fingerprint

Robots
Planning
Labeling
Intelligent robots
Motion planning
Labels

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

Zhang, Y., Sreedharan, S., Kulkarni, A., Chakraborti, T., Zhuo, H. H., & Kambhampati, S. (2017). Plan explicability and predictability for robot task planning. In ICRA 2017 - IEEE International Conference on Robotics and Automation (pp. 1313-1320). [7989155] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989155
