Abstract
The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e., algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regard to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we conclude with a discussion of the moral implications of such behavior from the perspective of the doctor-patient relationship.
| Field | Value |
|---|---|
| Original language | English (US) |
| Title of host publication | AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 53-59 |
| Number of pages | 7 |
| ISBN (Electronic) | 9781450363242 |
| DOIs | https://doi.org/10.1145/3306618.3314281 |
| State | Published - Jan 27 2019 |
| Event | 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019 - Honolulu, United States. Duration: Jan 27 2019 → Jan 28 2019 |
Publication series

| Name |
|---|
| AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society |
Conference

| Field | Value |
|---|---|
| Conference | 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019 |
| Country | United States |
| City | Honolulu |
| Period | 1/27/19 → 1/28/19 |
Keywords
- Automated Planning
- Hippocratic Decorum
- Human-Aware AI
- Model Reconciliation
- Plan Explanations
ASJC Scopus subject areas
- Artificial Intelligence
Cite this
(When) can AI bots lie? / Chakraborti, Tathagata; Kambhampati, Subbarao.
AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, Inc, 2019. p. 53-59 (AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society).Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - (When) can AI bots lie?
AU - Chakraborti, Tathagata
AU - Kambhampati, Subbarao
PY - 2019/1/27
Y1 - 2019/1/27
AB - The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e., algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regard to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we conclude with a discussion of the moral implications of such behavior from the perspective of the doctor-patient relationship.
KW - Automated Planning
KW - Hippocratic Decorum
KW - Human-Aware AI
KW - Model Reconciliation
KW - Plan Explanations
UR - http://www.scopus.com/inward/record.url?scp=85070622046&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070622046&partnerID=8YFLogxK
U2 - 10.1145/3306618.3314281
DO - 10.1145/3306618.3314281
M3 - Conference contribution
AN - SCOPUS:85070622046
T3 - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
SP - 53
EP - 59
BT - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
ER -