Defending bit-flip attack through DNN weight reconstruction

Jingtao Li, Adnan Siraj Rakin, Yan Xiong, Liangliang Chang, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Scopus citations


Recent studies show that adversarial attacks on neural network weights, a.k.a. Bit-Flip Attacks (BFA), can severely degrade a Deep Neural Network's (DNN) prediction accuracy. In this work, we propose a novel weight reconstruction method as a countermeasure to such BFAs. Specifically, during inference, the weights are reconstructed such that the weight perturbation due to BFA is minimized or diffused to the neighboring weights. We demonstrate that our method significantly improves DNN robustness against both random and gradient-based BFA variants. Even under the most aggressive attack (i.e., greedy progressive bit search), our method maintains a test accuracy of 60% on ImageNet after 5 attack iterations, while the baseline accuracy drops below 1%.
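To make the "diffused to the neighboring weights" idea concrete, here is a minimal illustrative sketch (not the authors' actual algorithm; the grouping scheme, group size, and the idea of storing per-group sums offline are assumptions for illustration). It shows how redundancy over small weight groups lets a large single-weight perturbation, such as an exponent-bit flip, be spread evenly across the group at inference time:

```python
import numpy as np

def reconstruct(weights, stored_sums, group_size=8):
    """Restore each weight group's stored sum, diffusing any
    single-weight perturbation evenly across the group.
    (Illustrative sketch only, not the paper's method.)"""
    w = weights.reshape(-1, group_size).astype(np.float64)
    # Deviation of each group's current sum from the sum recorded
    # offline; spread it evenly over the group's members.
    correction = (stored_sums - w.sum(axis=1)) / group_size
    return (w + correction[:, None]).reshape(weights.shape)

# Toy demo: 16 weights in groups of 8, one weight hit by a "bit flip".
rng = np.random.default_rng(0)
w = rng.normal(size=16)
sums = w.reshape(-1, 8).sum(axis=1)   # redundancy captured offline
w_attacked = w.copy()
w_attacked[3] += 64.0                 # large shift, like an exponent-bit flip
w_rec = reconstruct(w_attacked, sums)
```

After reconstruction, every group's sum matches its stored value again, so the unattacked weights each absorb only 64/8 = 8 of the perturbation instead of one weight carrying all of it; a real defense would combine this kind of redundancy with the quantized weight representation described in the paper.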

Original language: English (US)
Title of host publication: 2020 57th ACM/IEEE Design Automation Conference, DAC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781450367257
State: Published - Jul 2020
Event: 57th ACM/IEEE Design Automation Conference, DAC 2020 - Virtual, San Francisco, United States
Duration: Jul 20 2020 - Jul 24 2020

Publication series

Name: Proceedings - Design Automation Conference
ISSN (Print): 0738-100X


Conference: 57th ACM/IEEE Design Automation Conference, DAC 2020
Country/Territory: United States
City: Virtual, San Francisco


Keywords
  • Bit-Flip Attack
  • Row-Hammer Attack
  • Security of Deep Neural Network

ASJC Scopus subject areas

  • Computer Science Applications
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Modeling and Simulation
