Balancing explicability and explanation in human-aware planning

Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable (Zhang et al. 2017) to a human observer, as well as the ability to provide explanations (Chakraborti et al. 2017b) when such plans cannot be generated. This has led to the notion of "multi-model planning", which aims to incorporate the effects of human expectations into the deliberative process of a planner, either in the form of explicable task planning or of the explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself, by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware", bringing existing principles of planning together under the umbrella of a single plan generation process. We situate our discussion with the recent work on explicable planning and explanation generation specifically in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot engaged in a typical search-and-reconnaissance task with an external supervisor.
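
The abstract only names the trade-off that MEGA makes; as a rough intuition, below is a minimal, hypothetical Python sketch of a model-space search that balances the length of an explanation (model updates communicated to the human) against the residual inexplicability of the plan under the human's updated model. The set-based model encoding, the weighting parameter alpha, and the cost functions are illustrative assumptions for this sketch, not the paper's actual formulation.

from itertools import combinations

def inexplicability(cost_in_human_model, cost_in_robot_model):
    # How much worse the robot's plan looks under the human's model than
    # under the robot's own model (0 means the plan appears optimal).
    return max(0, cost_in_human_model - cost_in_robot_model)

def mega_search(robot_model, human_model, plan_cost, alpha=1.0):
    # Enumerate candidate explanations (subsets of the model differences)
    # and pick one minimizing |explanation| + alpha * inexplicability.
    diff = sorted(robot_model ^ human_model)  # beliefs the models disagree on
    best, best_cost = None, float("inf")
    for k in range(len(diff) + 1):
        for explanation in combinations(diff, k):
            # An update adds a missing belief or retracts a mistaken one.
            updated = human_model.symmetric_difference(explanation)
            cost = k + alpha * inexplicability(plan_cost(updated),
                                               plan_cost(robot_model))
            if cost < best_cost:
                best, best_cost = explanation, cost
    return best, best_cost

# Toy usage: models as sets of beliefs about the robot's capabilities; the
# robot's plan "costs" less under models that share more of those beliefs.
robot = {"can_lift", "door_unlocked", "has_key"}
human = {"has_key"}
plan_cost = lambda m: 10 - 2 * len(m & robot)  # hypothetical cost model
print(mega_search(robot, human, plan_cost))    # full explanation wins here

With a large alpha, communicating the full model difference wins out (pure explanation); with alpha near zero, the empty explanation wins and the burden shifts to choosing a more explicable plan, which mirrors the trade-off the abstract describes.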

Original language: English (US)
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Pages: 61-68
Number of pages: 8
Volume: FS-17-01 - FS-17-05
ISBN (Electronic): 9781577357940
State: Published - Jan 1 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: Nov 9, 2017 – Nov 11, 2017

Fingerprint

Planning
Supervisory personnel
Demonstrations
Decision making
Robots

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Sreedharan, S., Chakraborti, T., & Kambhampati, S. (2017). Balancing explicability and explanation in human-aware planning. In FS-17-01: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind (Vol. FS-17-01 - FS-17-05, pp. 61-68). AI Access Foundation.

