TY - GEN
T1 - Active explicable planning for human-robot teaming
AU - Hanni, Akkamahadevi
AU - Zhang, Yu
N1 - Funding Information:
We thank the anonymous reviewers for their helpful comments. This research is supported in part by the NSF grant IIS-1844524, the NASA grant NNX17AD06G, and the AFOSR grant FA9550-18-1-0067.
Publisher Copyright:
© 2021 ACM.
PY - 2021/3/8
Y1 - 2021/3/8
N2 - Intelligent robots are redefining autonomous tasks but are still far from being fully capable of assisting humans in day-to-day tasks. An important requirement of collaboration is to have a clear understanding of each other's expectations and capabilities. A lack of such understanding may lead to serious issues such as loose coordination between teammates, ineffective team performance, and ultimately mission failures. Hence, it is important for robots to behave explicably to make themselves understandable to humans. One of the challenges here is that the expectations of the human are often hidden and dynamically changing as the human interacts with the robot. Existing approaches to plan explicability often assume the human's expectations are known and static. In this paper, we propose the idea of active explicable planning to address this issue. We apply a Bayesian approach to model and predict dynamic human beliefs, making the robot more anticipatory and hence able to generate more efficient plans without impacting explicability. We hypothesize that active explicable plans can be more efficient and more explicable at the same time, compared to the plans generated by existing methods. From the preliminary results of an MTurk study, we find that our approach effectively captures the dynamic beliefs of the human, which can be used to generate efficient and explicable behavior that benefits from dynamically changing expectations.
AB - Intelligent robots are redefining autonomous tasks but are still far from being fully capable of assisting humans in day-to-day tasks. An important requirement of collaboration is to have a clear understanding of each other's expectations and capabilities. A lack of such understanding may lead to serious issues such as loose coordination between teammates, ineffective team performance, and ultimately mission failures. Hence, it is important for robots to behave explicably to make themselves understandable to humans. One of the challenges here is that the expectations of the human are often hidden and dynamically changing as the human interacts with the robot. Existing approaches to plan explicability often assume the human's expectations are known and static. In this paper, we propose the idea of active explicable planning to address this issue. We apply a Bayesian approach to model and predict dynamic human beliefs, making the robot more anticipatory and hence able to generate more efficient plans without impacting explicability. We hypothesize that active explicable plans can be more efficient and more explicable at the same time, compared to the plans generated by existing methods. From the preliminary results of an MTurk study, we find that our approach effectively captures the dynamic beliefs of the human, which can be used to generate efficient and explicable behavior that benefits from dynamically changing expectations.
UR - http://www.scopus.com/inward/record.url?scp=85102772695&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102772695&partnerID=8YFLogxK
U2 - 10.1145/3434074.3447154
DO - 10.1145/3434074.3447154
M3 - Conference contribution
AN - SCOPUS:85102772695
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 176
EP - 180
BT - HRI 2021 - Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
T2 - 2021 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2021
Y2 - 8 March 2021 through 11 March 2021
ER -