XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks

Zhewei Jiang, Shihui Yin, Mingoo Seok, Jae-sun Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

39 Scopus citations

Abstract

We present an in-memory computing SRAM macro that computes XNOR-and-accumulate for binary/ternary deep neural networks on the bitline, without row-by-row data access. It achieves 33X better energy and 300X better energy-delay product than a digital ASIC, and also achieves significantly higher accuracy than prior in-SRAM computing macros (e.g., 98.3% vs. 90% on MNIST) by supporting mainstream DNN/CNN algorithms.
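The abstract's core operation, XNOR-and-accumulate, is the standard trick for binarized networks: with weights and activations constrained to {-1, +1} and encoded as bits {0, 1}, a dot product reduces to a bitwise XNOR followed by a popcount. A minimal software sketch of that arithmetic (an illustration of the operation only, not the paper's bitline circuit):

```python
# Illustrative sketch: XNOR-and-accumulate as used in binary neural networks.
# {-1, +1} values are encoded as bits {0, 1}; XNOR yields 1 where the signs
# match (product +1) and 0 where they differ (product -1).

def xnor_accumulate(w_bits, x_bits):
    """Dot product of two {-1, +1} vectors given as 0/1 bit lists."""
    assert len(w_bits) == len(x_bits)
    n = len(w_bits)
    # Popcount of the XNOR: number of matching bit positions.
    matches = sum(1 for w, x in zip(w_bits, x_bits) if w == x)
    # matches contribute +1 each, mismatches -1 each.
    return 2 * matches - n

# Cross-check against explicit {-1, +1} arithmetic.
w = [1, 0, 1, 1]   # encodes [+1, -1, +1, +1]
x = [1, 1, 0, 1]   # encodes [+1, +1, -1, +1]
signed = lambda b: 2 * b - 1
expected = sum(signed(a) * signed(b) for a, b in zip(w, x))
assert xnor_accumulate(w, x) == expected  # both equal 0 here
```

The macro described in the paper evaluates this same sum in analog fashion on the SRAM bitline across all rows at once, which is where the energy and energy-delay gains over a digital ASIC come from.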

Original language: English (US)
Title of host publication: 2018 IEEE Symposium on VLSI Technology, VLSI Technology 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 173-174
Number of pages: 2
Volume: 2018-June
ISBN (Electronic): 9781538642160
DOIs
State: Published - Oct 25 2018
Event: 38th IEEE Symposium on VLSI Technology, VLSI Technology 2018 - Honolulu, United States
Duration: Jun 18 2018 to Jun 22 2018

Other

Other: 38th IEEE Symposium on VLSI Technology, VLSI Technology 2018
Country: United States
City: Honolulu
Period: 6/18/18 to 6/22/18

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

