Balancing explicability and explanation in human-aware planning

Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable (Zhang et al. 2017) to a human observer, as well as the ability to provide explanations (Chakraborti et al. 2017b) when such plans cannot be generated. This has led to the notion of "multi-model planning", which aims to incorporate the effects of human expectation into the deliberative process of a planner - either in the form of explicable task planning or in the explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a tradeoff during the plan generation process itself by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware" by bringing together existing principles of planning under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search and reconnaissance task with an external supervisor.
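The tradeoff the abstract describes can be illustrated with a minimal sketch: given candidate plans, pick the one minimizing a weighted sum of an explicability penalty (how surprising the plan looks to the human) and the length of the explanation needed to reconcile the rest. This is only a hypothetical illustration of the balancing idea, not the paper's MEGA algorithm (which searches in model space); all names and numbers below are invented.

```python
# Hypothetical sketch of balancing explicability against explanation cost.
# Not the paper's MEGA implementation; candidate plans and costs are made up.

def total_cost(plan, alpha):
    """Weighted sum: alpha scales how much inexplicable behavior is penalized
    relative to the length of the explanation the agent must provide."""
    return alpha * plan["explicability_penalty"] + plan["explanation_length"]

def pick_plan(candidates, alpha):
    """Return the candidate plan that minimizes the combined cost."""
    return min(candidates, key=lambda p: total_cost(p, alpha))

candidates = [
    {"name": "robot-optimal", "explicability_penalty": 5, "explanation_length": 1},
    {"name": "fully-explicable", "explicability_penalty": 0, "explanation_length": 4},
    {"name": "compromise", "explicability_penalty": 2, "explanation_length": 2},
]

# A high alpha favors behaving explicably; a low alpha tolerates
# surprising-but-optimal plans backed by explanations.
print(pick_plan(candidates, alpha=2.0)["name"])  # → fully-explicable
print(pick_plan(candidates, alpha=0.1)["name"])  # → robot-optimal
```

Varying the weight `alpha` shifts the agent between conforming to the human's expectations and explaining its deviations, which is the tradeoff the paper formalizes during plan generation.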

Original language: English (US)
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Pages: 61-68
Number of pages: 8
Volume: FS-17-01 - FS-17-05
ISBN (Electronic): 9781577357940
State: Published - Jan 1 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: Nov 9 2017 – Nov 11 2017

Other

Other: 2017 AAAI Fall Symposium
Country: United States
City: Arlington
Period: 11/9/17 – 11/11/17

ASJC Scopus subject areas

  • Engineering (all)


Cite this

    Sreedharan, S., Chakraborti, T., & Kambhampati, S. (2017). Balancing explicability and explanation in human-aware planning. In FS-17-01: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind (Vol. FS-17-01 - FS-17-05, pp. 61-68). AI Access Foundation.