TY - CONF
T1 - BRIDGING THE GAP
T2 - 10th International Conference on Learning Representations, ICLR 2022
AU - Sreedharan, Sarath
AU - Soni, Utkarsh
AU - Verma, Mudit
AU - Srivastava, Siddharth
AU - Kambhampati, Subbarao
N1 - Funding Information:
This research is supported in part by ONR grants N00014-16-1-2892, N00014-18-1-2442, N00014-18-1-2840, N00014-9-1-2119, AFOSR grant FA9550-18-1-0067, NSF 1909370, DARPA SAIL-ON grant W911NF19-2-0006 and a JP Morgan AI Faculty Research grant. We thank Been Kim, the reviewers and the members of the Yochan research group for helpful discussions and feedback.
Publisher Copyright:
© 2022 ICLR 2022 - 10th International Conference on Learning Representations. All rights reserved.
PY - 2022
Y1 - 2022
N2 - As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions. A significant hurdle to allowing for such explanatory dialogue could be the vocabulary mismatch between the user and the AI system. This paper introduces methods for providing contrastive explanations in terms of user-specified concepts for sequential decision-making settings where the system's model of the task may be best represented as an inscrutable model. We do this by building partial symbolic models of a local approximation of the task that can be leveraged to answer the user queries. We test these methods on a popular Atari game (Montezuma's Revenge) and variants of Sokoban (a well-known planning benchmark) and report the results of user studies to evaluate whether people find explanations generated in this form useful.
AB - As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions. A significant hurdle to allowing for such explanatory dialogue could be the vocabulary mismatch between the user and the AI system. This paper introduces methods for providing contrastive explanations in terms of user-specified concepts for sequential decision-making settings where the system's model of the task may be best represented as an inscrutable model. We do this by building partial symbolic models of a local approximation of the task that can be leveraged to answer the user queries. We test these methods on a popular Atari game (Montezuma's Revenge) and variants of Sokoban (a well-known planning benchmark) and report the results of user studies to evaluate whether people find explanations generated in this form useful.
UR - http://www.scopus.com/inward/record.url?scp=85140395279&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140395279&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85140395279
Y2 - 25 April 2022 through 29 April 2022
ER -