TY - GEN
T1 - Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robust DNN Hardware against Adversarial Input and Weight Attacks
AU - Cherupally, Sai Kiran
AU - Rakin, Adnan Siraj
AU - Yin, Shihui
AU - Seok, Mingoo
AU - Fan, Deliang
AU - Seo, Jae Sun
N1 - Funding Information:
ACKNOWLEDGMENT This work is partially supported by NSF grants 1652866, 1715443, 2005209, and 2019548, and C-BRIC, one of six centers in JUMP, a SRC program sponsored by DARPA.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/12/5
Y1 - 2021/12/5
N2 - In-memory computing (IMC) substantially improves the energy efficiency of deep neural network (DNN) hardware by activating many rows together and performing analog computing. The noisy analog IMC induces some amount of accuracy drop in hardware acceleration, which is generally considered a negative effect. However, in this work, we discover that such intrinsic hardware noise can, on the contrary, play a positive role in enhancing adversarial robustness. To achieve this, we propose a new DNN training scheme that integrates measured IMC hardware noise and aggressive partial-sum quantization at the IMC crossbar. We show that this effectively improves the robustness of IMC DNN hardware against both adversarial input and weight attacks. Against black-box adversarial input attacks and bit-flip weight attacks, DNN robustness is improved by up to 10.5% (CIFAR-10 accuracy) and 33.6% (number of bit-flips), respectively, compared to conventional DNNs.
AB - In-memory computing (IMC) substantially improves the energy efficiency of deep neural network (DNN) hardware by activating many rows together and performing analog computing. The noisy analog IMC induces some amount of accuracy drop in hardware acceleration, which is generally considered a negative effect. However, in this work, we discover that such intrinsic hardware noise can, on the contrary, play a positive role in enhancing adversarial robustness. To achieve this, we propose a new DNN training scheme that integrates measured IMC hardware noise and aggressive partial-sum quantization at the IMC crossbar. We show that this effectively improves the robustness of IMC DNN hardware against both adversarial input and weight attacks. Against black-box adversarial input attacks and bit-flip weight attacks, DNN robustness is improved by up to 10.5% (CIFAR-10 accuracy) and 33.6% (number of bit-flips), respectively, compared to conventional DNNs.
KW - adversarial attack
KW - adversarial robustness
KW - in-memory computing
KW - low-precision quantization
KW - noise injection
UR - http://www.scopus.com/inward/record.url?scp=85119418872&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119418872&partnerID=8YFLogxK
U2 - 10.1109/DAC18074.2021.9586233
DO - 10.1109/DAC18074.2021.9586233
M3 - Conference contribution
AN - SCOPUS:85119418872
T3 - Proceedings - Design Automation Conference
SP - 559
EP - 564
BT - 2021 58th ACM/IEEE Design Automation Conference, DAC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 58th ACM/IEEE Design Automation Conference, DAC 2021
Y2 - 5 December 2021 through 9 December 2021
ER -