TY - GEN
T1 - Hierarchical expertise level modeling for user specific contrastive explanations
AU - Sreedharan, Sarath
AU - Srivastava, Siddharth
AU - Kambhampati, Subbarao
N1 - Funding Information:
We thank Dan Weld for helpful comments on a previous draft. This research is supported in part by the AFOSR grant FA9550-18-1-0067, ONR grants N00014-16-1-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, and the NASA grant NNX17AD06G.
Publisher Copyright:
© 2018 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2018
Y1 - 2018
N2 - There is a growing interest within the AI research community in developing autonomous systems capable of explaining their behavior to users. However, the problem of computing explanations for users of different levels of expertise has received little research attention. We propose an approach for addressing this problem by representing the user's understanding of the task as an abstraction of the domain model that the planner uses. We present algorithms for generating minimal explanations in cases where this abstract human model is not known. We reduce the problem of generating an explanation to a search over the space of abstract models and show that while the complete problem is NP-hard, a greedy algorithm can provide good approximations of the optimal solution. We also empirically show that our approach can efficiently compute explanations for a variety of problems.
UR - http://www.scopus.com/inward/record.url?scp=85055693745&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055693745&partnerID=8YFLogxK
U2 - 10.24963/ijcai.2018/671
DO - 10.24963/ijcai.2018/671
M3 - Conference contribution
AN - SCOPUS:85055693745
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 4829
EP - 4836
BT - Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI 2018
A2 - Lang, Jérôme
PB - International Joint Conferences on Artificial Intelligence
T2 - 27th International Joint Conference on Artificial Intelligence, IJCAI 2018
Y2 - 13 July 2018 through 19 July 2018
ER -