TY - CHAP
T1 - Explanation as Model Reconciliation
AU - Sreedharan, Sarath
AU - Kulkarni, Anagha
AU - Kambhampati, Subbarao
N1 - Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
AB - In this chapter, we revisit the explicability score and investigate an alternate strategy to improve the explicability of the robot behavior, namely explanations. Rather than force the robot to choose behaviors that are inherently explicable in the human model, here we will let the robot choose a behavior optimal in its model and use communication to address the central reason why the human is confused about the behavior in the first place, i.e., the model difference. That is, the robot will help the human understand why the behavior was performed, by choosing to reveal parts of its model that were previously unknown to the human. This would allow us to overcome one of the main shortcomings of the plan generation methods discussed in Chapter 2, namely that there might not exist a plan in the robot model that has a high explicability score. In this scenario, the explicability score of the plan is only limited by the agent’s ability to effectively explain it. In this chapter, in addition to introducing the basic framework of explanation as model reconciliation under a certain set of assumptions, we will also look at several types of model reconciliation explanations, study some of their properties, and consider some simple approximations. In the coming chapters, we will further extend the idea of explanations and look at ways of relaxing some of the assumptions made in this chapter.
UR - http://www.scopus.com/inward/record.url?scp=85139522165&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139522165&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-03767-2_5
DO - 10.1007/978-3-031-03767-2_5
M3 - Chapter
AN - SCOPUS:85139522165
T3 - Synthesis Lectures on Artificial Intelligence and Machine Learning
SP - 59
EP - 80
BT - Synthesis Lectures on Artificial Intelligence and Machine Learning
PB - Springer Nature
ER -