Abstract
We present an in-memory computing SRAM macro that performs XNOR-and-accumulate operations for binary/ternary deep neural networks directly on the bitlines, without row-by-row data access. It achieves 33× better energy efficiency and a 300× better energy-delay product than a digital ASIC, and significantly higher accuracy than prior in-SRAM computing macros (e.g., 98.3% vs. 90% on MNIST) because it can support mainstream DNN/CNN algorithms.
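To illustrate the operation the macro accelerates: in binarized networks, weights and activations in {-1, +1} are commonly encoded as bits {0, 1}, which reduces a dot product to a bitwise XNOR followed by a popcount. The sketch below is illustrative software, not the paper's circuit; the function name and bit-packing convention are assumptions for the example.

```python
def xnor_accumulate(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1,+1} vectors packed as n-bit ints.

    Illustrative sketch of XNOR-and-accumulate; bit value 1 encodes +1,
    bit value 0 encodes -1.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # 1 wherever the two elements agree
    matches = bin(xnor).count("1")     # popcount of agreements
    return 2 * matches - n             # agreements minus disagreements

# Example: a = [+1, -1, +1, +1] -> 0b1011, b = [+1, +1, -1, +1] -> 0b1101
# Conventional dot product: 1 - 1 - 1 + 1 = 0
print(xnor_accumulate(0b1011, 0b1101, 4))  # -> 0
```

In the macro, the XNOR happens in the bitcells and the accumulation is read out as an analog quantity on the shared bitline, which is what removes the row-by-row access.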
Original language | English (US) |
---|---|
Title of host publication | 2018 IEEE Symposium on VLSI Technology, VLSI Technology 2018 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 173-174 |
Number of pages | 2 |
Volume | 2018-June |
ISBN (Electronic) | 9781538642160 |
DOIs | |
State | Published - Oct 25 2018 |
Event | 38th IEEE Symposium on VLSI Technology, VLSI Technology 2018 - Honolulu, United States. Duration: Jun 18 2018 → Jun 22 2018 |
Other
Other | 38th IEEE Symposium on VLSI Technology, VLSI Technology 2018 |
---|---|
Country/Territory | United States |
City | Honolulu |
Period | 6/18/18 → 6/22/18 |
ASJC Scopus subject areas
- Electrical and Electronic Engineering