TY - GEN
T1 - Online explanation generation for planning tasks in human-robot teaming
AU - Zakershahrak, Mehrdad
AU - Gong, Ze
AU - Sadassivam, Nikhillesh
AU - Zhang, Yu
N1 - Funding Information:
We thank the anonymous reviewers for their helpful comments. This research is supported in part by the NSF grant IIS-1844524, the NASA grant NNX17AD06G, and the AFOSR grant FA9550-18-1-0067.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/24
Y1 - 2020/10/24
AB - As AI becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to justify its behavior is one of the key requirements of explainable agency. Prior work on explanation generation has focused on supporting the rationale behind the robot's decision or behavior. These approaches, however, fail to consider the mental demand of understanding the received explanation. In other words, the human teammate is expected to understand an explanation no matter how much information is presented. In this work, we argue that explanations, especially those of a complex nature, should be made in an online fashion during execution, which helps spread out the information to be explained and thus reduces the mental workload of humans in highly cognitively demanding tasks. A challenge here, however, is that the different parts of an explanation may depend on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented with three variations satisfying different "online" properties. The new explanation generation methods are based on a model reconciliation setting introduced in our prior work. We evaluated our methods both with human subjects in a simulated rover domain, using the NASA Task Load Index (TLX), and synthetically with ten different problems across two standard IPC domains. Results strongly suggest that our methods generate explanations that are perceived as less cognitively demanding and much preferred over the baselines, and that they are computationally efficient.
UR - http://www.scopus.com/inward/record.url?scp=85102395924&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102395924&partnerID=8YFLogxK
U2 - 10.1109/IROS45743.2020.9341792
DO - 10.1109/IROS45743.2020.9341792
M3 - Conference contribution
AN - SCOPUS:85102395924
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 6304
EP - 6310
BT - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Y2 - 24 October 2020 through 24 January 2021
ER -