TY - GEN
T1 - Plan explicability and predictability for robot task planning
AU - Zhang, Yu
AU - Sreedharan, Sarath
AU - Kulkarni, Anagha
AU - Chakraborti, Tathagata
AU - Zhuo, Hankz Hankui
AU - Kambhampati, Subbarao
N1 - Funding Information:
This research is supported in part by the ONR grants N00014-16-1-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, the NASA grant NNX17AD06G, and the NSFC grant U1611262.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/21
Y1 - 2017/7/21
N2 - Intelligent robots and machines are becoming pervasive in human-populated environments. A desirable capability of these agents is to respond to goal-oriented commands by autonomously constructing task plans. However, such autonomy can add significant cognitive load and potentially introduce safety risks to humans when agents behave in unexpected ways. Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans. While previous work has studied socially acceptable robots that interact with humans in 'natural ways', and other work has investigated legible motion planning, there is no general solution for high-level task planning. To address this issue, we introduce the notions of plan explicability and predictability. To compute these measures, we first postulate that humans understand agent plans by associating abstract tasks with agent actions, which can be considered a labeling process. We learn the labeling scheme that humans apply to agent plans from training examples using conditional random fields (CRFs). Then, we use the learned model to label a new plan to compute its explicability and predictability. Agents can use these measures to proactively choose, or directly synthesize, plans that are more explicable and predictable to humans. We provide evaluations on a synthetic domain and with a physical robot to demonstrate the effectiveness of our approach.
UR - http://www.scopus.com/inward/record.url?scp=85027976998&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85027976998&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2017.7989155
DO - 10.1109/ICRA.2017.7989155
M3 - Conference contribution
AN - SCOPUS:85027976998
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 1313
EP - 1320
BT - ICRA 2017 - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Y2 - 29 May 2017 through 3 June 2017
ER -