Behavior Explanation as Intention Signaling in Human-Robot Teaming

Ze Gong, Yu Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Facilitating a shared team understanding is an important task in human-robot teaming. Efficient collaboration requires not only that the robot understand what the human is doing, but also that the robot's behavior be understood by (i.e., explainable to) the human. While most prior work has focused on the first aspect, the latter has also begun to draw significant attention. We propose an approach to explaining robot behavior as intention signaling using natural language sentences. In contrast to recent approaches that generate explicable and legible plans, intention signaling does not require the robot to deviate from its optimal plan, nor does it require humans to update their knowledge, as explanation generation generally does. The key questions for intention signaling are what to signal (content) and when to signal it (timing). Building on our prior work, we formulate the human's interpretation of robot actions as a labeling process to be learned. To capture dependencies between the interpretations of robot actions that are far apart, skip-chain Conditional Random Fields (CRFs) are used. Answering the what and the when can then be converted to an inference problem in the skip-chain CRF. Candidate timings and contents of signaling are explored by fixing the labels of certain actions in the CRF model; the configuration that maximizes the underlying probability of associating labels with the remaining actions, which reflects the human's understanding of the robot's plan, is returned for signaling. For evaluation, we construct a synthetic domain to verify that intention signaling can help achieve better teaming by reducing criticism of robot behavior that may appear undesirable but is otherwise required, e.g., due to information asymmetry that results in misinterpretation. We use Amazon Mechanical Turk (MTurk) to assess robot behavior in two settings (i.e., with and without signaling). Results show that our approach achieves the desired effect of creating more explainable robot behavior.
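The search over signaling timing and content described in the abstract can be sketched as brute-force inference in a tiny skip-chain CRF: signaling an action's intention is modeled as clamping that action's label, and the best signal is the clamp that most raises the probability of the observer labeling the remaining actions correctly. Everything below (the label set, the potential values, the skip edges, and the single-goal clamp) is invented for illustration; the paper's actual potentials are learned from human data, and its inference is not exhaustive enumeration.

```python
import itertools

LABELS = ["goalA", "goalB"]          # hypothetical intention labels
N = 5                                 # number of robot actions

# Hypothetical unary potentials: how strongly action i alone suggests
# label y to a human observer (values are invented for this sketch).
unary = [
    {"goalA": 1.0, "goalB": 1.0},    # ambiguous opening action
    {"goalA": 0.6, "goalB": 1.4},    # looks like goalB (misleading)
    {"goalA": 1.0, "goalB": 1.0},
    {"goalA": 1.2, "goalB": 0.8},
    {"goalA": 1.5, "goalB": 0.5},
]

def pairwise(y1, y2):
    """Chain edge: interpretations of adjacent actions tend to persist."""
    return 1.5 if y1 == y2 else 0.5

SKIP_EDGES = [(0, 4)]                 # skip edge tying distant, related actions

def skip(y1, y2):
    """Skip edge: distant but related actions share an interpretation."""
    return 2.0 if y1 == y2 else 0.3

def weight(seq):
    """Unnormalized probability of one full labeling of the action sequence."""
    w = 1.0
    for i, y in enumerate(seq):
        w *= unary[i][y]
    for i in range(N - 1):
        w *= pairwise(seq[i], seq[i + 1])
    for i, j in SKIP_EDGES:
        w *= skip(seq[i], seq[j])
    return w

def prob_of_true_labels(clamped):
    """P(every action labeled with the robot's true intention | clamps)."""
    true_seq = tuple(["goalA"] * N)   # the robot's plan actually serves goalA
    num = z = 0.0
    for seq in itertools.product(LABELS, repeat=N):
        if any(seq[i] != y for i, y in clamped.items()):
            continue                  # labeling inconsistent with the signal
        w = weight(seq)
        z += w
        if seq == true_seq:
            num = w
    return num / z

# Search over timing (which action to signal about): signaling action i
# clamps its label to the robot's true intention, and we keep the timing
# that best helps the observer label the rest of the plan correctly.
baseline = prob_of_true_labels({})
best = max(range(N), key=lambda i: prob_of_true_labels({i: "goalA"}))
best_p = prob_of_true_labels({best: "goalA"})
print(f"no signal: {baseline:.3f}; best timing: action {best}, p = {best_p:.3f}")
```

Since clamping a label to the true intention only removes inconsistent labelings from the partition function, the clamped probability can never fall below the baseline; the search simply picks the clamp with the largest gain.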

Original language: English (US)
Title of host publication: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1005-1011
Number of pages: 7
ISBN (Electronic): 9781538679807
DOI: 10.1109/ROMAN.2018.8525675
State: Published - Nov 6 2018
Event: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018 - Nanjing, China
Duration: Aug 27 2018 - Aug 31 2018

Publication series

Name: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication

Conference

Conference: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018
Country: China
City: Nanjing
Period: 8/27/18 - 8/31/18


ASJC Scopus subject areas

  • Human-Computer Interaction
  • Cognitive Neuroscience
  • Communication
  • Artificial Intelligence

Cite this

Gong, Z., & Zhang, Y. (2018). Behavior Explanation as Intention Signaling in Human-Robot Teaming. In RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication (pp. 1005-1011). [8525675] (RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ROMAN.2018.8525675
