Distributed dynamic team trust in human, artificial intelligence, and robot teaming

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

Any functional human-AI-robot team consists of multiple stakeholders as well as one or more artificial agents (e.g., AI agents and embodied robotic agents). Each stakeholder's trust in an artificial agent matters: it not only affects that stakeholder's performance on tasks with human teammates and artificial agents but also influences their trust in other stakeholders and how those stakeholders trust the artificial agents. Interpersonal trust and human-agent trust thus mutually influence each other. Traditional measures of trust in human-robot interaction have focused on one end-user's trust in one artificial agent rather than on team-level trust, which involves all relevant stakeholders and the interactions among these entities. Traditional measures of trust have also been largely static, unable to capture distributed trust dynamics at the team level. To fill this gap, this chapter proposes a distributed dynamic team trust (D2T2) framework and potential measures for its application in human-AI-robot teaming.
Original language: English (US)
Title of host publication: Trust in Human-Robot Interaction
Editors: Chang S. Nam, Joseph B. Lyons
Publisher: Academic Press
Chapter: 13
Pages: 301-319
DOIs
State: E-pub ahead of print - Nov 20 2020