TY - GEN
T1 - CMP-PIM: An energy-efficient comparator-based processing-in-memory neural network accelerator
T2 - 55th Annual Design Automation Conference, DAC 2018
AU - Angizi, Shaahin
AU - He, Zhezhi
AU - Rakin, Adnan Siraj
AU - Fan, Deliang
N1 - Funding Information:
This work is supported in part by the National Science Foundation under Grant No. 1740126 and Semiconductor Research Corporation nCORE.
Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/6/24
Y1 - 2018/6/24
N2 - In this paper, an energy-efficient and high-speed comparator-based processing-in-memory accelerator (CMP-PIM) is proposed to efficiently execute a novel hardware-oriented comparator-based deep neural network called CMPNET. Inspired by the local binary pattern feature extraction method combined with depthwise separable convolution, we first modify the existing Convolutional Neural Network (CNN) algorithm by replacing the computationally intensive multiplications in convolution layers with more efficient and less complex comparison and addition operations. Then, we propose CMP-PIM, which employs a parallel computational memory sub-array as its fundamental processing unit based on SOT-MRAM. We compare CMP-PIM accelerator performance on different datasets with recent CNN accelerator designs. With close inference accuracy on the SVHN dataset, CMP-PIM achieves ∼94× and 3× better energy efficiency compared to CNN and Local Binary CNN (LBCNN), respectively. In addition, it achieves a 4.3× speed-up compared to the CNN baseline with identical network configuration.
AB - In this paper, an energy-efficient and high-speed comparator-based processing-in-memory accelerator (CMP-PIM) is proposed to efficiently execute a novel hardware-oriented comparator-based deep neural network called CMPNET. Inspired by the local binary pattern feature extraction method combined with depthwise separable convolution, we first modify the existing Convolutional Neural Network (CNN) algorithm by replacing the computationally intensive multiplications in convolution layers with more efficient and less complex comparison and addition operations. Then, we propose CMP-PIM, which employs a parallel computational memory sub-array as its fundamental processing unit based on SOT-MRAM. We compare CMP-PIM accelerator performance on different datasets with recent CNN accelerator designs. With close inference accuracy on the SVHN dataset, CMP-PIM achieves ∼94× and 3× better energy efficiency compared to CNN and Local Binary CNN (LBCNN), respectively. In addition, it achieves a 4.3× speed-up compared to the CNN baseline with identical network configuration.
UR - http://www.scopus.com/inward/record.url?scp=85053683815&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85053683815&partnerID=8YFLogxK
U2 - 10.1145/3195970.3196009
DO - 10.1145/3195970.3196009
M3 - Conference contribution
AN - SCOPUS:85053683815
SN - 9781450357005
T3 - Proceedings - Design Automation Conference
BT - Proceedings of the 55th Annual Design Automation Conference, DAC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 24 June 2018 through 29 June 2018
ER -