TY - GEN
T1 - (When) can AI bots lie?
AU - Chakraborti, Tathagata
AU - Kambhampati, Subbarao
N1 - Funding Information:
The majority of the work was completed while the first author was a PhD student at Arizona State University. This research is supported in part by the AFOSR grant FA9550-18-1-0067, the ONR grants N00014-16-1-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, and the NASA grant NNX17AD06G.
Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/1/27
Y1 - 2019/1/27
N2 - The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e., algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regard to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion on the moral implications of such behavior from the perspective of the doctor-patient relationship.
AB - The ability of an AI agent to build mental models can open up pathways for manipulating and exploiting the human in the hopes of achieving some greater good. In fact, such behavior does not necessarily require any malicious intent but can rather be borne out of cooperative scenarios. It is also beyond the scope of misinterpretation of intents, as in the case of value alignment problems, and thus can be effectively engineered if desired (i.e., algorithms exist that can optimize such behavior not because models were misspecified but because they were misused). Such techniques pose several unresolved ethical and moral questions with regard to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we end with a discussion on the moral implications of such behavior from the perspective of the doctor-patient relationship.
KW - Automated Planning
KW - Hippocratic Decorum
KW - Human-Aware AI
KW - Model Reconciliation
KW - Plan Explanations
UR - http://www.scopus.com/inward/record.url?scp=85070622046&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070622046&partnerID=8YFLogxK
U2 - 10.1145/3306618.3314281
DO - 10.1145/3306618.3314281
M3 - Conference contribution
AN - SCOPUS:85070622046
T3 - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
SP - 53
EP - 59
BT - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
T2 - 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019
Y2 - 27 January 2019 through 28 January 2019
ER -