Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons

Xiaoyu Sun, Xiaochen Peng, Pai Yu Chen, Rui Liu, Jae-sun Seo, Shimeng Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

Binary Neural Networks (BNNs) have recently been proposed to improve the area and energy efficiency of machine/deep learning hardware accelerators, which opens an opportunity to use the technologically more mature binary RRAM devices to implement the binary synaptic weights effectively. In addition, the binary neuron activation enables using a sense amplifier instead of an analog-to-digital converter, allowing bitwise communication between layers of the neural network. However, the sense amplifier has an intrinsic offset that affects the threshold of the binary neuron and may thus degrade the classification accuracy. In this work, we analyze a fully parallel RRAM synaptic array architecture that implements the fully connected layers of a convolutional neural network with (+1, -1) weights and (+1, 0) neurons. Simulation results with the TSMC 65 nm PDK show that the offset of the current-mode sense amplifier introduces a slight accuracy loss, from ∼98.5% to ∼97.6%, on the MNIST dataset. Nevertheless, the proposed fully parallel BNN architecture (P-BNN) achieves 137.35 TOPS/W energy efficiency for inference, a ∼20X improvement over the sequential BNN architecture (S-BNN) with its row-by-row read-out scheme. Moreover, the proposed P-BNN architecture saves ∼16% of chip area, as it eliminates the area overhead of the MAC peripheral units in the S-BNN architecture.
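To make the array operation concrete, the following minimal NumPy sketch (not from the paper; the layer sizes and offset magnitude are illustrative assumptions) shows how (+1, -1) weights and (+1, 0) activations reduce each column readout to a signed sum, and how sense-amplifier offset can flip marginal neuron decisions:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 binary inputs feeding one fully connected layer of 32 outputs.
N_IN, N_OUT = 64, 32

# Binary synaptic weights in {+1, -1}; in the real array each weight would
# occupy a pair of RRAM cells on differential bit lines.
W = rng.choice([+1, -1], size=(N_OUT, N_IN))

# Binary neuron activations in {+1, 0}: an active input drives its word line,
# an inactive one leaves it off and contributes no read current.
x = rng.choice([+1, 0], size=N_IN)

# Fully parallel readout: all word lines are asserted at once, so each
# column's bit-line current difference is proportional to the signed MAC value.
mac = W @ x  # integer partial sums, one per output column

# The current-mode sense amplifier binarizes each column against a threshold.
# Its intrinsic input offset is modeled as zero-mean Gaussian noise; the sigma
# below is an arbitrary illustrative value, not a figure from the paper.
offset = rng.normal(loc=0.0, scale=1.0, size=N_OUT)
y = (mac + offset > 0).astype(int)  # next layer's {+1, 0} activations

print("ideal activations:", (mac > 0).astype(int))
print("with SA offset:   ", y)

Columns whose partial sum lands near the threshold are the ones the offset can flip, which is the mechanism behind the ∼98.5% to ∼97.6% accuracy drop reported above.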

Original language: English (US)
Title of host publication: ASP-DAC 2018 - 23rd Asia and South Pacific Design Automation Conference, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 574-579
Number of pages: 6
Volume: 2018-January
ISBN (Electronic): 9781509006021
DOI: 10.1109/ASPDAC.2018.8297384
State: Published - Feb 20 2018
Event: 23rd Asia and South Pacific Design Automation Conference, ASP-DAC 2018 - Jeju, Korea, Republic of
Duration: Jan 22 2018 - Jan 25 2018

Other

Other: 23rd Asia and South Pacific Design Automation Conference, ASP-DAC 2018
Country: Korea, Republic of
City: Jeju
Period: 1/22/18 - 1/25/18

Fingerprint

  • Neurons
  • Neural networks
  • Network architecture
  • Energy efficiency
  • Parallel architectures
  • Digital to analog conversion
  • Particle accelerators
  • Chemical activation
  • RRAM
  • Hardware
  • Communication

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design

Cite this

Sun, X., Peng, X., Chen, P. Y., Liu, R., Seo, J., & Yu, S. (2018). Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons. In ASP-DAC 2018 - 23rd Asia and South Pacific Design Automation Conference, Proceedings (Vol. 2018-January, pp. 574-579). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ASPDAC.2018.8297384
