TY - JOUR
T1 - Handling model uncertainty and multiplicity in explanations via model reconciliation
AU - Sreedharan, Sarath
AU - Chakraborti, Tathagata
AU - Kambhampati, Subbarao
N1 - Funding Information:
Acknowledgements. This research is supported in part by the AFOSR grant FA9550-18-1-0067, the ONR grants N00014-16-1-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, and the NASA grant NNX17AD06G. The second author is also supported by the IBM Ph.D. Fellowship from 2016 to 2018.
Publisher Copyright:
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2018
Y1 - 2018
AB - Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing the explanation in terms of the differences between their models. However, the human's mental model (and hence the model differences) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information, and how these redundancies can be reduced using conditional explanations that iterate with the human to attain common ground. Finally, we introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanation in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.
UR - http://www.scopus.com/inward/record.url?scp=85054768722&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85054768722&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85054768722
SN - 2334-0835
VL - 2018-June
SP - 518
EP - 526
JO - Proceedings of the International Conference on Automated Planning and Scheduling, ICAPS
JF - Proceedings of the International Conference on Automated Planning and Scheduling, ICAPS
T2 - 28th International Conference on Automated Planning and Scheduling, ICAPS 2018
Y2 - 24 June 2018 through 29 June 2018
ER -