XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks

Zhewei Jiang, Shihui Yin, Mingoo Seok, Jae-sun Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

We present an in-memory computing SRAM macro that computes XNOR-and-accumulate in binary/ternary deep neural networks on the bitline without row-by-row data access. It achieves 33X better energy and 300X better energy-delay product than digital ASIC, and also achieves significantly higher accuracy than prior in-SRAM computing macro (e.g., 98.3% vs. 90% for MNIST) by being able to support the mainstream DNN/CNN algorithms.
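The XNOR-and-accumulate operation named in the abstract is the standard binarized-network reduction: with activations and weights constrained to +1/-1, a dot product becomes a bitwise XNOR followed by a popcount. A minimal pure-Python sketch of that arithmetic identity (function names here are illustrative; the macro itself performs this computation in analog on the SRAM bitlines, not in software):

```python
def encode(values):
    """Pack a list of +1/-1 values into an int: bit i = 1 encodes +1."""
    bits = 0
    for i, s in enumerate(values):
        if s == 1:
            bits |= 1 << i
    return bits

def xnor_accumulate(x_bits, w_bits, n):
    """Dot product of two length-n +1/-1 vectors via XNOR + popcount."""
    mask = (1 << n) - 1
    agree = ~(x_bits ^ w_bits) & mask      # XNOR: 1 where signs match
    popcount = bin(agree).count("1")
    return 2 * popcount - n                # matches minus mismatches

x = [1, -1, 1, 1, -1, 1, -1, -1]
w = [1, 1, -1, 1, -1, -1, 1, -1]
assert xnor_accumulate(encode(x), encode(w), 8) == sum(a * b for a, b in zip(x, w))
```

The final identity (2·popcount − n) holds because each matching bit contributes +1 and each mismatch −1 to the +/-1 dot product.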

Original language: English (US)
Title of host publication: 2018 IEEE Symposium on VLSI Technology, VLSI Technology 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 173-174
Number of pages: 2
Volume: 2018-June
ISBN (Electronic): 9781538642160
DOI: 10.1109/VLSIT.2018.8510687
State: Published - Oct 25 2018
Event: 38th IEEE Symposium on VLSI Technology, VLSI Technology 2018 - Honolulu, United States
Duration: Jun 18 2018 - Jun 22 2018



ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Jiang, Z., Yin, S., Seok, M., & Seo, J. (2018). XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks. In 2018 IEEE Symposium on VLSI Technology, VLSI Technology 2018 (Vol. 2018-June, pp. 173-174). [8510687] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/VLSIT.2018.8510687

@inproceedings{316dfc9b379a41b48e9d7f53a76ccfa3,
title = "XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks",
abstract = "We present an in-memory computing SRAM macro that computes XNOR-and-accumulate in binary/ternary deep neural networks on the bitline without row-by-row data access. It achieves 33X better energy and 300X better energy-delay product than digital ASIC, and also achieves significantly higher accuracy than prior in-SRAM computing macro (e.g., 98.3\% vs. 90\% for MNIST) by being able to support the mainstream DNN/CNN algorithms.",
author = "Zhewei Jiang and Shihui Yin and Mingoo Seok and Jae-sun Seo",
year = "2018",
month = oct,
day = "25",
doi = "10.1109/VLSIT.2018.8510687",
language = "English (US)",
volume = "2018-June",
pages = "173--174",
booktitle = "2018 IEEE Symposium on VLSI Technology, VLSI Technology 2018",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}
