TY - GEN
T1 - Plan Explanations as Model Reconciliation
AU - Chakraborti, Tathagata
AU - Sreedharan, Sarath
AU - Grover, Sachin
AU - Kambhampati, Subbarao
N1 - Funding Information:
This research is supported in part by the AFOSR grant FA9550-18-1-0067, the ONR grants N00014-16-1-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, and the NASA grant NNX17AD06G.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/3/22
Y1 - 2019/3/22
N2 - Recent work in explanation generation for decision-making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences between the model of the system and the human's understanding of the same, and how the explanation process arising from this mismatch can then be seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective, and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. In the process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.
AB - Recent work in explanation generation for decision-making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences between the model of the system and the human's understanding of the same, and how the explanation process arising from this mismatch can then be seen as a process of reconciliation of these models. Existing algorithms in such settings, while having been built on contrastive, selective, and social properties of explanations as studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. In the process, we hope to demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans.
KW - Explainable AI
KW - explanations as model reconciliation
KW - human-robot interaction
KW - planning and decision-making
UR - http://www.scopus.com/inward/record.url?scp=85063984750&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063984750&partnerID=8YFLogxK
U2 - 10.1109/HRI.2019.8673193
DO - 10.1109/HRI.2019.8673193
M3 - Conference contribution
AN - SCOPUS:85063984750
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 258
EP - 266
BT - HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
T2 - 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019
Y2 - 11 March 2019 through 14 March 2019
ER -