Interactive Plan Explicability in Human-Robot Teaming

Mehrdad Zakershahrak, Akshay Sonawane, Ze Gong, Yu Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates' expectations of itself. Awareness of these expectations leads to robot behaviors that better align with human expectations, thus facilitating more efficient and potentially safer teams. Our work addresses human-robot interaction with the consideration of such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely as an observer. In this paper, we extend plan explicability to interactive settings where the human's and robot's behaviors can influence each other. We term this new measure Interactive Plan Explicability (IPE). Using the fast forward (FF) planner, we compare the joint plan generated by our approach under this measure with the plan generated by FF without such consideration, as well as with the plan created by human subjects interacting with a robot running an FF planner. Since human subjects are expected to adapt dynamically to the robot's behavior when it deviates from their expectations, the plan created with human subjects should be more explicable than the FF plan and comparable to the explicable plan generated by our approach. Results indicate that the explicability score of the plans generated by our algorithm is indeed closer to that of the human interactive plan than to that of the FF plan, implying that the plans generated by our algorithm align better with the plans the human expects during execution. This can lead to more efficient collaboration in practice.
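To make the idea concrete, a minimal sketch of how an interactive explicability score might be computed is shown below: a joint human-robot plan is scored by how often the robot's actions match what a model of the human's expectations predicts, with the shared state updated by both agents' actions as the interaction unfolds. The function names, the state and transition interface, and the simple matching-based score are assumptions made for illustration only; they are not the paper's actual IPE formulation or its FF-planner integration.

    # Minimal sketch (Python), assuming an illustrative matching-based score;
    # not the paper's actual IPE formulation or FF-planner integration.
    from typing import Callable, List, Tuple

    Action = str
    State = dict                      # hypothetical state representation
    JointStep = Tuple[str, Action]    # (agent, action), agent is "robot" or "human"

    def interactive_explicability_score(
        joint_plan: List[JointStep],
        initial_state: State,
        transition: Callable[[State, str, Action], State],      # hypothetical shared dynamics
        expected_robot_action: Callable[[State], Action],       # hypothetical model of the human's expectations
    ) -> float:
        """Return the fraction of robot actions that match what the human-expectation
        model predicts in the current state (a score in [0, 1]). Both agents' actions
        advance the shared state, so expectations can shift as the interaction unfolds."""
        state = initial_state
        matches = robot_steps = 0
        for agent, action in joint_plan:
            if agent == "robot":
                robot_steps += 1
                if action == expected_robot_action(state):
                    matches += 1
            state = transition(state, agent, action)  # human and robot actions both update the state
        return matches / robot_steps if robot_steps else 1.0

In this sketch the expectation model is an abstract callable; it stands in for whatever predictive model of the human teammate a planner would consult when scoring candidate joint plans.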

Original language: English (US)
Title of host publication: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1012-1017
Number of pages: 6
ISBN (Electronic): 9781538679807
DOIs: https://doi.org/10.1109/ROMAN.2018.8525540
State: Published - Nov 6 2018
Event: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018 - Nanjing, China
Duration: Aug 27 2018 - Aug 31 2018

Publication series

Name: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication

Conference

Conference: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018
Country: China
City: Nanjing
Period: 8/27/18 - 8/31/18

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Cognitive Neuroscience
  • Communication
  • Artificial Intelligence

Cite this

Zakershahrak, M., Sonawane, A., Gong, Z., & Zhang, Y. (2018). Interactive Plan Explicability in Human-Robot Teaming. In RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication (pp. 1012-1017). [8525540] (RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ROMAN.2018.8525540
