Abstract

Recent work in explanation generation for decision making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences in the model of the system and the human's understanding of the same, and how the explanation process as a result of this mismatch can be then seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. During that process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.

Original language: English (US)
Title of host publication: HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction
Publisher: IEEE Computer Society
Pages: 258-266
Number of pages: 9
ISBN (Electronic): 9781538685556
DOIs: 10.1109/HRI.2019.8673193
State: Published - Mar 22 2019
Event: 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019 - Daegu, Korea, Republic of
Duration: Mar 11 2019 - Mar 14 2019

Publication series

Name: ACM/IEEE International Conference on Human-Robot Interaction
Volume: 2019-March
ISSN (Electronic): 2167-2148

Conference

Conference: 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019
Country: Korea, Republic of
City: Daegu
Period: 3/11/19 - 3/14/19

Keywords

  • Explainable AI
  • explanations as model reconciliation
  • human-robot interaction
  • planning and decision-making

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Electrical and Electronic Engineering

Cite this

Chakraborti, T., Sreedharan, S., Grover, S., & Kambhampati, S. (2019). Plan Explanations as Model Reconciliation. In HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction (pp. 258-266). [8673193] (ACM/IEEE International Conference on Human-Robot Interaction; Vol. 2019-March). IEEE Computer Society. https://doi.org/10.1109/HRI.2019.8673193

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

@inproceedings{dd47c0f905ac402da99abafda51e9251,
title = "Plan Explanations as Model Reconciliation",
abstract = "Recent work in explanation generation for decision making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences in the model of the system and the human's understanding of the same, and how the explanation process as a result of this mismatch can be then seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. During that process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.",
keywords = "Explainable AI, explanations as model reconciliation, human-robot interaction, planning and decision-making",
author = "Tathagata Chakraborti and Sarath Sreedharan and Sachin Grover and Subbarao Kambhampati",
year = "2019",
month = "3",
day = "22",
doi = "10.1109/HRI.2019.8673193",
language = "English (US)",
series = "ACM/IEEE International Conference on Human-Robot Interaction",
publisher = "IEEE Computer Society",
pages = "258--266",
isbn = "9781538685556",
booktitle = "HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction",
}

TY - GEN

T1 - Plan Explanations as Model Reconciliation

AU - Chakraborti, Tathagata

AU - Sreedharan, Sarath

AU - Grover, Sachin

AU - Kambhampati, Subbarao

PY - 2019/3/22

Y1 - 2019/3/22

N2 - Recent work in explanation generation for decision making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences in the model of the system and the human's understanding of the same, and how the explanation process as a result of this mismatch can be then seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. During that process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.

AB - Recent work in explanation generation for decision making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences in the model of the system and the human's understanding of the same, and how the explanation process as a result of this mismatch can be then seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. During that process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.

KW - Explainable AI

KW - explanations as model reconciliation

KW - human-robot interaction

KW - planning and decision-making

UR - http://www.scopus.com/inward/record.url?scp=85063984750&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063984750&partnerID=8YFLogxK

U2 - 10.1109/HRI.2019.8673193

DO - 10.1109/HRI.2019.8673193

M3 - Conference contribution

T3 - ACM/IEEE International Conference on Human-Robot Interaction

SP - 258

EP - 266

BT - HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction

PB - IEEE Computer Society

ER -