A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors

Win San Khwa, Jia Jing Chen, Jia Fang Li, Xin Si, En Yu Yang, Xiaoyu Sun, Rui Liu, Pai Yu Chen, Qiang Li, Shimeng Yu, Meng Fan Chang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

144 Scopus citations

Abstract

For deep-neural-network (DNN) processors [1-4], the product-sum (PS) operation predominates the computational workload for both convolution (CNVL) and fully-connected (FCNL) neural-network (NN) layers. This hinders the adoption of DNN processors in edge artificial-intelligence (AI) devices, which require low-power, low-cost, and fast inference. Binary DNNs [5-6] are used to reduce computation and hardware costs for AI edge devices; however, a memory bottleneck remains. In Fig. 31.5.1, conventional PE arrays exploit parallelized computation but suffer from inefficient single-row SRAM access to weights and intermediate data. Computing-in-memory (CIM) improves efficiency by enabling parallel computing, reducing memory accesses, and suppressing intermediate data. Nonetheless, three critical challenges remain (Fig. 31.5.2), particularly for FCNL. We overcome these problems by co-optimizing the circuits and the system. Recent research has focused on XNOR-based binary-DNN structures [6]. Although these achieve slightly higher accuracy than other binary structures, they incur significant hardware cost (i.e., 8T-12T SRAM) to implement a CIM system. To further reduce hardware cost by implementing the CIM system with 6T SRAM, we employ a binary DNN with 0/1 neurons and ±1 weights, as proposed in [7]. We implemented a 65nm 4Kb algorithm-dependent CIM-SRAM unit-macro and an in-house binary DNN structure (focusing on FCNL with a simplified PE array) for cost-aware DNN AI edge processors. This resulted in the first binary-based CIM-SRAM macro, with the fastest (2.3ns) PS operation and the highest energy efficiency (55.8TOPS/W) among reported CIM macros [3-4].
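To make the arithmetic concrete, the minimal sketch below (not from the paper; the function name and array shapes are illustrative) shows the PS operation under the 0/1-neuron, ±1-weight binary-DNN formulation of [7]: with 0/1 inputs, the product-sum reduces to summing the ±1 weights at positions where the input neuron fires, which is what allows the result to be accumulated in parallel inside the SRAM array.

```python
import numpy as np

def binary_product_sum(neurons: np.ndarray, weights: np.ndarray) -> int:
    """Product-sum (PS) of 0/1 neuron activations and ±1 weights.

    A 0-valued neuron contributes nothing, so the PS reduces to the
    sum of the ±1 weights wherever the corresponding neuron is 1.
    """
    assert set(np.unique(neurons)) <= {0, 1}, "neurons must be 0/1"
    assert set(np.unique(weights)) <= {-1, 1}, "weights must be ±1"
    return int(np.sum(weights[neurons == 1]))

# Hypothetical example: one output neuron of a fully-connected (FCNL) layer.
x = np.array([1, 0, 1, 1])       # 0/1 neuron activations
w = np.array([1, -1, -1, 1])     # ±1 weights stored in the memory array
print(binary_product_sum(x, w))  # 1*1 + 1*(-1) + 1*1 = 1
```

In the macro itself this reduction is performed along the memory bitlines rather than in software, but the input/output relation is the same.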

Original language: English (US)
Title of host publication: 2018 IEEE International Solid-State Circuits Conference, ISSCC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 496-498
Number of pages: 3
ISBN (Electronic): 9781509049394
DOIs
State: Published - Mar 8 2018
Event: 65th IEEE International Solid-State Circuits Conference, ISSCC 2018 - San Francisco, United States
Duration: Feb 11 2018 – Feb 15 2018

Publication series

Name: Digest of Technical Papers - IEEE International Solid-State Circuits Conference
Volume: 61
ISSN (Print): 0193-6530

Other

Other: 65th IEEE International Solid-State Circuits Conference, ISSCC 2018
Country/Territory: United States
City: San Francisco
Period: 2/11/18 – 2/15/18

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Electrical and Electronic Engineering
