Abstract

In this chapter, we revisit the explicability score and investigate an alternative strategy for improving the explicability of the robot's behavior, namely explanations. Rather than forcing the robot to choose behaviors that are inherently explicable in the human model, here we will let the robot choose a behavior that is optimal in its own model and use communication to address the central reason the human is confused about the behavior in the first place, i.e., the model difference. That is, the robot will help the human understand why the behavior was performed by choosing to reveal parts of its model that were previously unknown to the human. This allows us to overcome one of the main shortcomings of the plan generation methods discussed in Chapter 2, namely that there might not exist a plan in the robot model with a high explicability score. In this setting, the explicability score of the plan is limited only by the agent's ability to explain it effectively. In addition to introducing the basic framework of explanation as model reconciliation under a certain set of assumptions, we will also look at several types of model reconciliation explanations, study some of their properties, and consider some simple approximations. In the coming chapters, we will further extend the idea of explanations and look at ways of relaxing some of the assumptions made in this chapter.
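
The core idea can be illustrated with a small search: given the robot's model and the human's (possibly incomplete or incorrect) model of the robot, find a smallest set of model updates that, once communicated, makes the robot's chosen plan appear optimal to the human. Below is a minimal sketch of such a model reconciliation search, under the simplifying assumption that models are encoded as flat sets of abstract features and that we have access to a hypothetical is_optimal_in test that checks whether the robot's plan is optimal under a given model; the function name and feature encoding are illustrative, not the chapter's actual formalism.

from itertools import combinations

def model_reconciliation_explanation(robot_model, human_model, is_optimal_in):
    # Candidate edits: features the human's model is missing (to be disclosed)
    # and features the human wrongly believes (to be retracted).
    additions = set(robot_model) - set(human_model)
    removals = set(human_model) - set(robot_model)
    candidates = [("add", f) for f in additions] + [("remove", f) for f in removals]

    # Enumerating candidate explanations in order of size returns a minimal
    # one: the first edit set under which the plan is optimal for the human.
    for size in range(len(candidates) + 1):
        for subset in combinations(candidates, size):
            updated = set(human_model)
            for op, feature in subset:
                if op == "add":
                    updated.add(feature)
                else:
                    updated.discard(feature)
            if is_optimal_in(frozenset(updated)):
                return list(subset)

    # Even communicating the entire robot model does not make the plan
    # optimal in the human's eyes, so no explanation of this kind exists.
    return None

The size-ordered enumeration encodes a preference for the shortest explanation; it is exponential in the number of model differences, which is one reason simpler approximations are attractive in practice.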

Original language: English (US)
Title of host publication: Synthesis Lectures on Artificial Intelligence and Machine Learning
Publisher: Springer Nature
Pages: 59-80
Number of pages: 22
DOIs
State: Published - 2022

Publication series

Name: Synthesis Lectures on Artificial Intelligence and Machine Learning
ISSN (Print): 1939-4608
ISSN (Electronic): 1939-4616

ASJC Scopus subject areas

  • Artificial Intelligence
