TY - GEN
T1 - An Interpretable Deep Learning Approach to Understand Health Misinformation Transmission on YouTube
AU - Xie, Jiaheng
AU - Chai, Yidong
AU - Liu, Xiao
N1 - Publisher Copyright:
© 2022 IEEE Computer Society. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Deep learning methods have been deployed to predict the spread of misinformation, but they lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning model, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission on social media. GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features that drive viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning model that is generalizable to understanding human decisions. We provide direct implications for designing interventions to identify misinformation, control its transmission, and manage infodemics.
AB - Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Deep learning methods have been deployed to predict the spread of misinformation, but they lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning model, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission on social media. GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features that drive viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning model that is generalizable to understanding human decisions. We provide direct implications for designing interventions to identify misinformation, control its transmission, and manage infodemics.
UR - http://www.scopus.com/inward/record.url?scp=85142458264&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142458264&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85142458264
T3 - Proceedings of the Annual Hawaii International Conference on System Sciences
SP - 1470
EP - 1479
BT - Proceedings of the 55th Annual Hawaii International Conference on System Sciences, HICSS 2022
A2 - Bui, Tung X.
PB - IEEE Computer Society
T2 - 55th Annual Hawaii International Conference on System Sciences, HICSS 2022
Y2 - 3 January 2022 through 7 January 2022
ER -