TY - GEN
T1 - Exploiting Hybrid Precision for Training and Inference
T2 - 64th Annual IEEE International Electron Devices Meeting, IEDM 2018
AU - Sun, Xiaoyu
AU - Wang, Panni
AU - Ni, Kai
AU - Datta, Suman
AU - Yu, Shimeng
N1 - Funding Information:
ACKNOWLEDGMENT: This work is supported by ASCENT, one of the six SRC/DARPA JUMP centers.
Publisher Copyright:
© 2018 IEEE.
PY - 2019/1/16
Y1 - 2019/1/16
AB - In-memory computing with analog non-volatile memories (NVMs) can accelerate both the in-situ training and inference of deep neural networks (DNNs) by parallelizing multiply-accumulate (MAC) operations in the analog domain. However, in-situ training accuracy suffers unacceptable degradation due to undesired weight-update asymmetry/nonlinearity and limited bit precision. In this work, we overcome this challenge by introducing a compact ferroelectric FET (FeFET)-based synaptic cell that exploits hybrid precision for in-situ training and inference. We propose a novel hybrid approach in which the modulated 'volatile' gate voltage of the FeFET represents the least significant bits (LSBs) for symmetric/linear updates during training only, while the 'non-volatile' polarization states of the FeFET hold the most significant bits (MSBs) for inference. This design is demonstrated with an experimentally validated FeFET SPICE model and co-simulation with the TensorFlow framework. The results show that, with the proposed 6-bit and 7-bit synapse designs, in-situ training accuracy reaches ∼97.3% on the MNIST dataset and ∼87% on the CIFAR-10 dataset, respectively, approaching ideal software-based training.
UR - http://www.scopus.com/inward/record.url?scp=85061795371&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061795371&partnerID=8YFLogxK
U2 - 10.1109/IEDM.2018.8614611
DO - 10.1109/IEDM.2018.8614611
M3 - Conference contribution
AN - SCOPUS:85061795371
T3 - Technical Digest - International Electron Devices Meeting, IEDM
SP - 3.1.1-3.1.4
BT - 2018 IEEE International Electron Devices Meeting, IEDM 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 December 2018 through 5 December 2018
ER -