Synthesizing explainable behavior for human-AI collaboration

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

26 Scopus citations

Abstract

As AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. This requires AI systems to exhibit behavior that is explainable to humans. Synthesizing such behavior requires AI systems to reason not only with their own models of the task at hand, but also with the mental models of their human collaborators. Using several case studies from our ongoing research, I will discuss how such multi-model planning forms the basis for explainable behavior.

Original language: English (US)
Title of host publication: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 1-2
Number of pages: 2
ISBN (Electronic): 9781510892002
State: Published - 2019
Event: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019 - Montreal, Canada
Duration: May 13 2019 – May 17 2019

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 1
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Country/Territory: Canada
City: Montreal
Period: 5/13/19 – 5/17/19

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
