(When) can ai bots lie?

Tathagata Chakraborti, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Abstract

The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e. algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regards to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion on the moral implications of such behavior from the perspective of the doctor-patient relationship.

Original languageEnglish (US)
Title of host publicationAIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
PublisherAssociation for Computing Machinery, Inc
Pages53-59
Number of pages7
ISBN (Electronic)9781450363242
DOIs
StatePublished - Jan 27 2019
Event2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019 - Honolulu, United States
Duration: Jan 27 2019Jan 28 2019

Publication series

NameAIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society

Conference

Conference2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019
CountryUnited States
CityHonolulu
Period1/27/191/28/19

Fingerprint

Experiments

Keywords

  • Automated Planning
  • Hippocratic Decorum
  • Human-Aware AI
  • Model Reconciliation
  • Plan Explanations

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Chakraborti, T., & Kambhampati, S. (2019). (When) can ai bots lie? In AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 53-59). (AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society). Association for Computing Machinery, Inc. https://doi.org/10.1145/3306618.3314281

(When) can ai bots lie? / Chakraborti, Tathagata; Kambhampati, Subbarao.

AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, Inc, 2019. p. 53-59 (AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society).

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Chakraborti, T & Kambhampati, S 2019, (When) can ai bots lie? in AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, Inc, pp. 53-59, 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019, Honolulu, United States, 1/27/19. https://doi.org/10.1145/3306618.3314281
Chakraborti T, Kambhampati S. (When) can ai bots lie? In AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, Inc. 2019. p. 53-59. (AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society). https://doi.org/10.1145/3306618.3314281
Chakraborti, Tathagata ; Kambhampati, Subbarao. / (When) can ai bots lie?. AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, Inc, 2019. pp. 53-59 (AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society).
@inproceedings{f882405e903f4c9d9ce32760676dbc41,
title = "(When) can ai bots lie?",
abstract = "The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e. algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regards to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion on the moral implications of such behavior from the perspective of the doctor-patient relationship.",
keywords = "Automated Planning, Hippocratic Decorum, Human-Aware AI, Model Reconciliation, Plan Explanations",
author = "Tathagata Chakraborti and Subbarao Kambhampati",
year = "2019",
month = "1",
day = "27",
doi = "10.1145/3306618.3314281",
language = "English (US)",
series = "AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society",
publisher = "Association for Computing Machinery, Inc",
pages = "53--59",
booktitle = "AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society",

}

TY - GEN

T1 - (When) can ai bots lie?

AU - Chakraborti, Tathagata

AU - Kambhampati, Subbarao

PY - 2019/1/27

Y1 - 2019/1/27

N2 - The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e. algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regards to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion on the moral implications of such behavior from the perspective of the doctor-patient relationship.

AB - The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e. algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regards to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion on the moral implications of such behavior from the perspective of the doctor-patient relationship.

KW - Automated Planning

KW - Hippocratic Decorum

KW - Human-Aware AI

KW - Model Reconciliation

KW - Plan Explanations

UR - http://www.scopus.com/inward/record.url?scp=85070622046&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85070622046&partnerID=8YFLogxK

U2 - 10.1145/3306618.3314281

DO - 10.1145/3306618.3314281

M3 - Conference contribution

T3 - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society

SP - 53

EP - 59

BT - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society

PB - Association for Computing Machinery, Inc

ER -