TY - JOUR
T1 - Using state abstractions to compute personalized contrastive explanations for AI agent behavior
AU - Sreedharan, Sarath
AU - Srivastava, Siddharth
AU - Kambhampati, Subbarao
N1 - Funding Information:
This research is supported in part by ONR grants N00014-16-1-2892, N00014-18-1-2442, N00014-18-1-2840, and N00014-19-1-2119, AFOSR grant FA9550-18-1-0067, DARPA SAIL-ON grant W911NF-19-2-0006, NSF grants 1936997 (C-ACCEL), 1844325, and 1909370, NASA grant NNX17AD06G, and a JP Morgan AI Faculty Research grant.
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/12
Y1 - 2021/12
N2 - There is growing interest within the AI research community in developing autonomous systems capable of explaining their behavior to users. However, the problem of computing explanations for users with different levels of expertise has received little research attention. We propose an approach that addresses this problem by representing the user's understanding of the task as an abstraction of the domain model used by the planner. We present algorithms for generating minimal explanations in cases where this abstract human model is not known. We reduce the problem of generating an explanation to a search over the space of abstract models and show that, while the complete problem is NP-hard, a greedy algorithm can provide good approximations of the optimal solution. We empirically show that our approach can efficiently compute explanations for a variety of problems, and we perform user studies to test the utility of state abstractions in explanations.
AB - There is growing interest within the AI research community in developing autonomous systems capable of explaining their behavior to users. However, the problem of computing explanations for users with different levels of expertise has received little research attention. We propose an approach that addresses this problem by representing the user's understanding of the task as an abstraction of the domain model used by the planner. We present algorithms for generating minimal explanations in cases where this abstract human model is not known. We reduce the problem of generating an explanation to a search over the space of abstract models and show that, while the complete problem is NP-hard, a greedy algorithm can provide good approximations of the optimal solution. We empirically show that our approach can efficiently compute explanations for a variety of problems, and we perform user studies to test the utility of state abstractions in explanations.
KW - Abstractions
KW - Contrastive explanations
KW - Explanations for plans
UR - http://www.scopus.com/inward/record.url?scp=85112486554&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112486554&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2021.103570
DO - 10.1016/j.artint.2021.103570
M3 - Article
AN - SCOPUS:85112486554
SN - 0004-3702
VL - 301
JO - Artificial Intelligence
JF - Artificial Intelligence
M1 - 103570
ER -