Handling model uncertainty and multiplicity in explanations via model reconciliation

Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati

Research output: Contribution to journal › Conference article

5 Citations (Scopus)

Abstract

Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing the explanation in terms of the differences between their models. However, the human's mental model (and hence the difference) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how such redundancies can be reduced using conditional explanations that iterate with the human to attain common ground. Finally, we introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot in a typical search-and-reconnaissance scenario with an external human supervisor.
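The notion of a conformant explanation central to the abstract, a single set of model updates that justifies the agent's plan under every model the human might plausibly hold, can be sketched as below. This is a minimal illustration only, not the authors' algorithm: the fact-set representation of models, the brute-force subset search, and the caller-supplied plan_is_valid validator are all assumptions introduced here for concreteness.

from itertools import combinations

def conformant_explanation(robot_model, possible_human_models, plan, plan_is_valid):
    """Return a smallest set of robot-model facts whose disclosure makes
    `plan` check out under EVERY candidate human model (brute force).

    Models are abstracted as sets of facts (e.g., action preconditions the
    human is believed to hold); plan_is_valid(model, plan) is a caller-
    supplied stand-in for a plan validator.
    """
    # Facts the robot holds that at least one candidate human model lacks;
    # only these can change any human model's verdict on the plan.
    shared = set.intersection(*(set(m) for m in possible_human_models))
    candidates = sorted(set(robot_model) - shared)

    # Enumerate candidate explanations by increasing size, so the first hit
    # is a minimal conformant explanation.
    for size in range(len(candidates) + 1):
        for update in combinations(candidates, size):
            if all(plan_is_valid(set(m) | set(update), plan)
                   for m in possible_human_models):
                return set(update)   # valid in all possible human models
    return None                      # plan cannot be justified from the robot's model

By contrast, the conditional explanations mentioned in the abstract would interleave interaction with the human to prune the set of possible models rather than covering it in one shot, and the anytime variant would presumably report the best explanation found so far as this search proceeds.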

Original language: English (US)
Pages (from-to): 518-526
Number of pages: 9
Journal: Proceedings International Conference on Automated Planning and Scheduling, ICAPS
Volume: 2018-June
State: Published - Jan 1 2018
Event: 28th International Conference on Automated Planning and Scheduling, ICAPS 2018 - Delft, Netherlands
Duration: Jun 24 2018 - Jun 29 2018

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Information Systems and Management

Cite this

Handling model uncertainty and multiplicity in explanations via model reconciliation. / Sreedharan, Sarath; Chakraborti, Tathagata; Kambhampati, Subbarao.

In: Proceedings International Conference on Automated Planning and Scheduling, ICAPS, Vol. 2018-June, 01.01.2018, p. 518-526.

Research output: Contribution to journal › Conference article

@article{0fd560eb7af44547bbdad798e18dc905,
title = "Handling model uncertainty and multiplicity in explanations via model reconciliation",
abstract = "Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem by explaining its decisions in terms of these model differences. However, often the human’s mental model (and hence the difference) is not known precisely and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how such redundancies can be reduced using conditional explanations to iterate with the human to attain common ground. Finally, we will introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.",
author = "Sarath Sreedharan and Tathagata Chakraborti and Subbarao Kambhampati",
year = "2018",
month = "1",
day = "1",
language = "English (US)",
volume = "2018-June",
pages = "518--526",
journal = "Proceedings International Conference on Automated Planning and Scheduling, ICAPS",
issn = "2334-0835",

}

TY - JOUR

T1 - Handling model uncertainty and multiplicity in explanations via model reconciliation

AU - Sreedharan, Sarath

AU - Chakraborti, Tathagata

AU - Kambhampati, Subbarao

PY - 2018/1/1

Y1 - 2018/1/1

N2 - Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem by explaining its decisions in terms of these model differences. However, often the human’s mental model (and hence the difference) is not known precisely and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how such redundancies can be reduced using conditional explanations to iterate with the human to attain common ground. Finally, we will introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.

AB - Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem by explaining its decisions in terms of these model differences. However, often the human’s mental model (and hence the difference) is not known precisely and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how such redundancies can be reduced using conditional explanations to iterate with the human to attain common ground. Finally, we will introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.

UR - http://www.scopus.com/inward/record.url?scp=85054768722&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85054768722&partnerID=8YFLogxK

M3 - Conference article

VL - 2018-June

SP - 518

EP - 526

JO - Proceedings International Conference on Automated Planning and Scheduling, ICAPS

JF - Proceedings International Conference on Automated Planning and Scheduling, ICAPS

SN - 2334-0835

ER -