Any functional human-AI-robot team consists of multiple stakeholders as well as one or more artificial agents (e.g., AI agents and embodied robotic agents). Each stakeholder's trust in an artificial agent matters: it not only affects that stakeholder's performance on tasks with human teammates and artificial agents, but also influences their trust in other stakeholders and how those stakeholders in turn trust the artificial agents. Interpersonal trust and human-agent trust thus mutually influence each other. Traditional measures of trust in human-robot interaction have focused on a single end-user's trust in a single artificial agent, rather than on team-level trust involving all relevant stakeholders and the interactions among them. Moreover, traditional trust measures have been largely static, unable to capture distributed trust dynamics at the team level. To fill this gap, this chapter proposes a distributed dynamic team trust (D2T2) framework and potential measures for its application in human-AI-robot teaming.
Original language: English (US)
Title of host publication: Trust in Human-Robot Interaction
Editors: Chang S. Nam, Joseph B. Lyons
State: E-pub ahead of print - Nov 20 2020