TY - JOUR
T1 - Foundations of explanations as model reconciliation
AU - Sreedharan, Sarath
AU - Chakraborti, Tathagata
AU - Kambhampati, Subbarao
N1 - Funding Information:
Kambhampati's research is supported in part by ONR grants N00014-16-1-2892, N00014-18-1-2442, N00014-18-1-2840, N00014-19-1-2119, AFOSR grant FA9550-18-1-0067, DARPA SAIL-ON grant W911NF-19-2-0006, NSF grants 1936997 (C-ACCEL) and 1844325, a NASA grant NNX17AD06G, and a JP Morgan AI Faculty Research grant. Chakraborti was also supported by the IBM Ph.D. Fellowship during the formative years of the project.
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/12
Y1 - 2021/12
N2 - Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where users have domain and task models that differ from that used by the AI system. We posit that the explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a “model reconciliation problem” (MRP), where the AI system in effect suggests changes to the user's mental model so as to make its plan be optimal with respect to that changed user model. We will study the properties of such explanations, present algorithms for automatically computing them, discuss relevant extensions to the basic framework, and evaluate the performance of the proposed algorithms both empirically and through controlled user studies.
AB - Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where users have domain and task models that differ from that used by the AI system. We posit that the explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a “model reconciliation problem” (MRP), where the AI system in effect suggests changes to the user's mental model so as to make its plan be optimal with respect to that changed user model. We will study the properties of such explanations, present algorithms for automatically computing them, discuss relevant extensions to the basic framework, and evaluate the performance of the proposed algorithms both empirically and through controlled user studies.
KW - Automated planning
KW - Explainable AI
KW - Mental models
UR - http://www.scopus.com/inward/record.url?scp=85113197233&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113197233&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2021.103558
DO - 10.1016/j.artint.2021.103558
M3 - Article
AN - SCOPUS:85113197233
VL - 301
JO - Artificial Intelligence
JF - Artificial Intelligence
SN - 0004-3702
M1 - 103558
ER -