Towards Understanding User Preferences for Explanation Types in Model Reconciliation

Zahra Zahedi, Alberto Olmo, Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e. a process by which the planning agent can bring the explainee's (possibly faulty) model of a planning problem closer to its understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range from misunderstandings about the agent's beliefs (state), desires (goals) and capabilities (action model). Though existing literature has considered different kinds of these model differences to be equivalent, literature on the explanations in social sciences has suggested that explanations with similar logical properties may often be perceived differently by humans. In this brief report, we explore to what extent humans attribute importance to different kinds of model differences that have been traditionally considered equivalent in the model reconciliation setting. Our results suggest that people prefer the explanations which are related to the effects of actions.

Original language: English (US)
Title of host publication: HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction
Publisher: IEEE Computer Society
Pages: 648-649
Number of pages: 2
ISBN (Electronic): 9781538685556
DOI: 10.1109/HRI.2019.8673097
State: Published - Mar 22 2019
Event: 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019 - Daegu, Korea, Republic of
Duration: Mar 11 2019 – Mar 14 2019

Publication series

Name: ACM/IEEE International Conference on Human-Robot Interaction
Volume: 2019-March
ISSN (Electronic): 2167-2148

Conference

Conference: 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019
Country: Korea, Republic of
City: Daegu
Period: 3/11/19 – 3/14/19

Fingerprint

  • Planning
  • Social sciences

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Electrical and Electronic Engineering

Cite this

Zahedi, Z., Olmo, A., Chakraborti, T., Sreedharan, S., & Kambhampati, S. (2019). Towards Understanding User Preferences for Explanation Types in Model Reconciliation. In HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction (pp. 648-649). [8673097] (ACM/IEEE International Conference on Human-Robot Interaction; Vol. 2019-March). IEEE Computer Society. https://doi.org/10.1109/HRI.2019.8673097

Towards Understanding User Preferences for Explanation Types in Model Reconciliation. / Zahedi, Zahra; Olmo, Alberto; Chakraborti, Tathagata; Sreedharan, Sarath; Kambhampati, Subbarao.

HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society, 2019. p. 648-649 8673097 (ACM/IEEE International Conference on Human-Robot Interaction; Vol. 2019-March).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Zahedi, Z, Olmo, A, Chakraborti, T, Sreedharan, S & Kambhampati, S 2019, Towards Understanding User Preferences for Explanation Types in Model Reconciliation. in HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction., 8673097, ACM/IEEE International Conference on Human-Robot Interaction, vol. 2019-March, IEEE Computer Society, pp. 648-649, 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019, Daegu, Korea, Republic of, 3/11/19. https://doi.org/10.1109/HRI.2019.8673097
Zahedi Z, Olmo A, Chakraborti T, Sreedharan S, Kambhampati S. Towards Understanding User Preferences for Explanation Types in Model Reconciliation. In HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society. 2019. p. 648-649. 8673097. (ACM/IEEE International Conference on Human-Robot Interaction). https://doi.org/10.1109/HRI.2019.8673097
Zahedi, Zahra ; Olmo, Alberto ; Chakraborti, Tathagata ; Sreedharan, Sarath ; Kambhampati, Subbarao. / Towards Understanding User Preferences for Explanation Types in Model Reconciliation. HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society, 2019. pp. 648-649 (ACM/IEEE International Conference on Human-Robot Interaction).
@inproceedings{5df4408ec572427f90b51614b2b4e459,
title = "Towards Understanding User Preferences for Explanation Types in Model Reconciliation",
abstract = "Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e. a process by which the planning agent can bring the explainee's (possibly faulty) model of a planning problem closer to its understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range from misunderstandings about the agent's beliefs (state), desires (goals) and capabilities (action model). Though existing literature has considered different kinds of these model differences to be equivalent, literature on the explanations in social sciences has suggested that explanations with similar logical properties may often be perceived differently by humans. In this brief report, we explore to what extent humans attribute importance to different kinds of model differences that have been traditionally considered equivalent in the model reconciliation setting. Our results suggest that people prefer the explanations which are related to the effects of actions.",
author = "Zahra Zahedi and Alberto Olmo and Tathagata Chakraborti and Sarath Sreedharan and Subbarao Kambhampati",
year = "2019",
month = mar,
day = "22",
doi = "10.1109/HRI.2019.8673097",
language = "English (US)",
isbn = "9781538685556",
series = "ACM/IEEE International Conference on Human-Robot Interaction",
publisher = "IEEE Computer Society",
pages = "648--649",
booktitle = "HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction",
}

TY - GEN

T1 - Towards Understanding User Preferences for Explanation Types in Model Reconciliation

AU - Zahedi, Zahra

AU - Olmo, Alberto

AU - Chakraborti, Tathagata

AU - Sreedharan, Sarath

AU - Kambhampati, Subbarao

PY - 2019/3/22

Y1 - 2019/3/22

N2 - Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e. a process by which the planning agent can bring the explainee's (possibly faulty) model of a planning problem closer to its understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range from misunderstandings about the agent's beliefs (state), desires (goals) and capabilities (action model). Though existing literature has considered different kinds of these model differences to be equivalent, literature on the explanations in social sciences has suggested that explanations with similar logical properties may often be perceived differently by humans. In this brief report, we explore to what extent humans attribute importance to different kinds of model differences that have been traditionally considered equivalent in the model reconciliation setting. Our results suggest that people prefer the explanations which are related to the effects of actions.

AB - Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e. a process by which the planning agent can bring the explainee's (possibly faulty) model of a planning problem closer to its understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range from misunderstandings about the agent's beliefs (state), desires (goals) and capabilities (action model). Though existing literature has considered different kinds of these model differences to be equivalent, literature on the explanations in social sciences has suggested that explanations with similar logical properties may often be perceived differently by humans. In this brief report, we explore to what extent humans attribute importance to different kinds of model differences that have been traditionally considered equivalent in the model reconciliation setting. Our results suggest that people prefer the explanations which are related to the effects of actions.

UR - http://www.scopus.com/inward/record.url?scp=85064015308&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85064015308&partnerID=8YFLogxK

U2 - 10.1109/HRI.2019.8673097

DO - 10.1109/HRI.2019.8673097

M3 - Conference contribution

AN - SCOPUS:85064015308

T3 - ACM/IEEE International Conference on Human-Robot Interaction

SP - 648

EP - 649

BT - HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction

PB - IEEE Computer Society

ER -