TY - GEN
T1 - Robust Sparse Regularization
T2 - 30th Great Lakes Symposium on VLSI, GLSVLSI 2020
AU - Rakin, Adnan Siraj
AU - He, Zhezhi
AU - Yang, Li
AU - Wang, Yanzhi
AU - Wang, Liqiang
AU - Fan, Deliang
N1 - Funding Information:
This work is supported in part by the National Science Foundation under Grant No.1931871.
Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/9/7
Y1 - 2020/9/7
N2 - Deep Neural Networks (DNNs) trained by gradient descent are known to be vulnerable to maliciously perturbed adversarial inputs, a.k.a. adversarial attacks. As a countermeasure against adversarial attacks, increasing model capacity has been discussed and reported as an effective way to enhance DNN robustness in many recent works. In this work, we show that shrinking the model size through proper weight pruning can even help to improve DNN robustness under adversarial attack. To obtain a simultaneously robust and compact DNN model, we propose a multi-objective training method called Robust Sparse Regularization (RSR), which fuses several regularization techniques: channel-wise noise injection, lasso weight penalty, and adversarial training. We conduct extensive experiments to show the effectiveness of RSR against popular white-box attacks (i.e., PGD and FGSM) and black-box attacks. Thanks to RSR, 85% of the weight connections of ResNet-18 can be pruned while still achieving 0.68% and 8.72% improvements in clean- and perturbed-data accuracy, respectively, on the CIFAR-10 dataset, in comparison to its PGD adversarial training baseline.
AB - Deep Neural Networks (DNNs) trained by gradient descent are known to be vulnerable to maliciously perturbed adversarial inputs, a.k.a. adversarial attacks. As a countermeasure against adversarial attacks, increasing model capacity has been discussed and reported as an effective way to enhance DNN robustness in many recent works. In this work, we show that shrinking the model size through proper weight pruning can even help to improve DNN robustness under adversarial attack. To obtain a simultaneously robust and compact DNN model, we propose a multi-objective training method called Robust Sparse Regularization (RSR), which fuses several regularization techniques: channel-wise noise injection, lasso weight penalty, and adversarial training. We conduct extensive experiments to show the effectiveness of RSR against popular white-box attacks (i.e., PGD and FGSM) and black-box attacks. Thanks to RSR, 85% of the weight connections of ResNet-18 can be pruned while still achieving 0.68% and 8.72% improvements in clean- and perturbed-data accuracy, respectively, on the CIFAR-10 dataset, in comparison to its PGD adversarial training baseline.
KW - Adversarial Defense
KW - Robust
KW - Sparse
UR - http://www.scopus.com/inward/record.url?scp=85091295650&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091295650&partnerID=8YFLogxK
U2 - 10.1145/3386263.3407651
DO - 10.1145/3386263.3407651
M3 - Conference contribution
AN - SCOPUS:85091295650
T3 - Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI
SP - 125
EP - 130
BT - GLSVLSI 2020 - Proceedings of the 2020 Great Lakes Symposium on VLSI
PB - Association for Computing Machinery
Y2 - 7 September 2020 through 9 September 2020
ER -