Exploring Model Stability of Deep Neural Networks for Reliable RRAM-Based In-Memory Acceleration

Gokul Krishnan, Li Yang, Jingbo Sun, Jubin Hazra, Xiaocong Du, Maximilian Liehr, Zheng Li, Karsten Beckmann, Rajiv V. Joshi, Nathaniel C. Cady, Deliang Fan, Yu Cao

Research output: Contribution to journal › Article › peer-review


Abstract

RRAM-based in-memory computing (IMC) effectively accelerates deep neural networks (DNNs), and model compression techniques such as quantization and pruning are necessary to improve algorithm mapping and hardware performance. However, in the presence of RRAM device variations, low-precision and sparse DNNs suffer from severe post-mapping accuracy loss. To address this, we investigate a new metric, model stability, derived from the loss landscape, to shed light on accuracy loss under device variations and model compression; this metric guides an algorithmic solution that maximizes model stability and mitigates accuracy loss. Based on statistical data from a CMOS/RRAM 1T1R test chip at 65 nm, we characterize wafer-level RRAM variations and develop a cross-layer benchmark tool that incorporates quantization, pruning, device variations, model stability, and IMC architecture parameters to assess post-mapping accuracy and hardware performance. Using this tool, we show that loss-landscape-based DNN model selection for stability effectively tolerates device variations and achieves a post-mapping accuracy higher than that obtained with 50% lower RRAM variations. Moreover, we quantitatively explain why model pruning increases the sensitivity to variations, whereas a lower-precision model tolerates variations better. Finally, we propose a novel variation-aware training method to improve model stability, under which the most stable model yields the best post-mapping accuracy for compressed DNNs. Experimental evaluation of the method shows up to 19%, 21%, and 11% post-mapping accuracy improvement for our 65 nm RRAM device, across various precisions and sparsity levels, on the CIFAR-10, CIFAR-100, and SVHN datasets, respectively.
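The variation-aware training described in the abstract injects device variations into the weights during training so that the network settles at a more stable point in the loss landscape. Below is a minimal PyTorch sketch of that idea under stated assumptions: a lognormal multiplicative conductance-variation model, an illustrative noise scale `sigma`, and a toy CNN; none of these are the paper's exact device model, network, or training recipe. The `post_mapping_accuracy` helper is likewise a hypothetical stand-in that mimics post-mapping evaluation by averaging accuracy over sampled, frozen weight perturbations.

```python
# Sketch only: lognormal variation model, sigma, and TinyCNN are assumptions,
# not the paper's characterized 65 nm RRAM device model or benchmark tool.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyConv2d(nn.Conv2d):
    """Conv layer whose weights see multiplicative device variation during training."""

    def __init__(self, *args, sigma=0.1, **kwargs):
        super().__init__(*args, **kwargs)
        self.sigma = sigma  # relative spread of the assumed lognormal variation

    def forward(self, x):
        weight = self.weight
        if self.training and self.sigma > 0:
            # Lognormal multiplicative noise approximates RRAM conductance spread.
            weight = weight * torch.exp(self.sigma * torch.randn_like(weight))
        return F.conv2d(x, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


class TinyCNN(nn.Module):
    """Toy CIFAR-style network built from variation-aware conv layers."""

    def __init__(self, sigma=0.1, num_classes=10):
        super().__init__()
        self.conv1 = NoisyConv2d(3, 16, 3, padding=1, sigma=sigma)
        self.conv2 = NoisyConv2d(16, 32, 3, padding=1, sigma=sigma)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))


@torch.no_grad()
def post_mapping_accuracy(model, loader, sigma, trials=5):
    """Estimate post-mapping accuracy by averaging over sampled weight perturbations."""
    model.eval()
    clean = {n: p.detach().clone() for n, p in model.named_parameters()}
    accs = []
    for _ in range(trials):
        for n, p in model.named_parameters():
            if "weight" in n:  # perturb only weights mapped onto RRAM cells
                p.copy_(clean[n] * torch.exp(sigma * torch.randn_like(p)))
        correct = total = 0
        for x, y in loader:
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
        accs.append(correct / total)
    for n, p in model.named_parameters():
        p.copy_(clean[n])  # restore the unperturbed weights
    return sum(accs) / len(accs)
```

Training `TinyCNN` with the noise enabled and then calling `post_mapping_accuracy` with the same `sigma` illustrates the abstract's comparison: a model trained to be stable under variation retains noticeably more accuracy after mapping than one trained without noise injection.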

Original language: English (US)
Pages (from-to): 2740-2752
Number of pages: 13
Journal: IEEE Transactions on Computers
Volume: 71
Issue number: 11
DOIs
State: Published - Nov 1 2022

Keywords

  • In-memory computing
  • RRAM
  • deep neural networks
  • model stability
  • pruning
  • quantization
  • reliability

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computational Theory and Mathematics
