Behavior Explanation as Intention Signaling in Human-Robot Teaming

Ze Gong, Yu Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Facilitating a shared team understanding is an important task in human-robot teaming. Efficient collaboration between the human and the robot requires not only that the robot understand what the human is doing, but also that the robot's behavior be understood by (i.e., be explainable to) the human. While most prior work has focused on the first aspect, the latter has begun to draw significant attention as well. We propose an approach to explaining robot behavior as intention signaling using natural language sentences. In contrast to recent approaches that generate explicable and legible plans, intention signaling does not require the robot to deviate from its optimal plan; nor does it require humans to update their knowledge, as is generally required for explanation generation. The key questions to be answered for intention signaling are the what (the content of signaling) and the when (its timing). Building on our prior work, we formulate the human's interpretation of robot actions as a labeling process to be learned. To capture dependencies between the interpretations of robot actions that are far apart, skip-chain Conditional Random Fields (CRFs) are used. Answering the when and what then becomes an inference problem in the skip-chain CRF. Potential timings and contents of signaling are explored by fixing the labels of certain actions in the CRF model; the configuration that maximizes the probability of associating labels with the remaining actions, which reflects the human's understanding of the robot's plan, is returned for signaling. For evaluation, we construct a synthetic domain to verify that intention signaling can help achieve better teaming by reducing criticism of robot behavior that may appear undesirable but is otherwise required, e.g., due to information asymmetry that results in misinterpretation. We use Amazon Mechanical Turk (MTurk) to assess robot behavior under two settings (i.e., with and without signaling). Results show that our approach achieves the desired effect of creating more explainable robot behavior.
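The abstract describes exploring signaling options by fixing the labels of certain actions in a skip-chain CRF and keeping the configuration that best explains the remaining actions. The following is a minimal illustrative sketch of that idea only, not the paper's implementation: it uses a toy four-action model with hand-picked pairwise potentials, one skip edge, and brute-force MAP inference. All names, labels, and potential values here are hypothetical.

```python
# Toy sketch: fixing a label (a signaled intention) in a small skip-chain
# CRF-like model changes the most likely interpretation of the other actions.
# Potentials and labels are invented for illustration.
import itertools
import math

LABELS = ["goal_A", "goal_B"]

# Pairwise compatibility: matching labels score higher. Hypothetical values.
def edge_score(a, b):
    return 1.0 if a == b else 0.2

CHAIN_EDGES = [(0, 1), (1, 2), (2, 3)]   # consecutive actions
SKIP_EDGES = [(0, 3)]                    # dependency between distant actions

def score(labels):
    # Log-potential of a full labeling over all chain and skip edges.
    return sum(math.log(edge_score(labels[i], labels[j]))
               for i, j in CHAIN_EDGES + SKIP_EDGES)

def map_assignment(fixed):
    """Brute-force MAP over 4 actions, with some labels fixed by signaling."""
    best, best_s = None, -math.inf
    for combo in itertools.product(LABELS, repeat=4):
        if any(combo[i] != lab for i, lab in fixed.items()):
            continue
        s = score(combo)
        if s > best_s:
            best, best_s = combo, s
    return best, best_s

# Signaling the intention behind action 0 as goal_B pins that label;
# inference then interprets the remaining actions consistently with it.
signaled, _ = map_assignment({0: "goal_B"})
print(signaled)  # → ('goal_B', 'goal_B', 'goal_B', 'goal_B')
```

In practice the paper uses learned skip-chain CRFs rather than hand-set potentials, and candidate timings/contents would be scored by the resulting probability over the remaining action labels instead of this exhaustive toy search.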

Original language: English (US)
Title of host publication: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1005-1011
Number of pages: 7
ISBN (Electronic): 9781538679807
DOI: https://doi.org/10.1109/ROMAN.2018.8525675
State: Published - Nov 6 2018
Event: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018 - Nanjing, China
Duration: Aug 27 2018 - Aug 31 2018

Publication series

Name: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication

Conference

Conference: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018
Country: China
City: Nanjing
Period: 8/27/18 - 8/31/18

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Cognitive Neuroscience
  • Communication
  • Artificial Intelligence


Cite this

Gong, Z., & Zhang, Y. (2018). Behavior Explanation as Intention Signaling in Human-Robot Teaming. In RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication (pp. 1005-1011). [8525675] (RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ROMAN.2018.8525675