Optimizing weight mapping and data flow for convolutional neural networks on RRAM based processing-in-memory architecture

Xiaochen Peng, Rui Liu, Shimeng Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

61 Scopus citations

Abstract

Resistive random access memory (RRAM) based array architectures have been proposed for on-chip acceleration of convolutional neural networks (CNNs), where the array can be configured for parallel dot-product computation by summing the column currents. Prior processing-in-memory (PIM) designs unroll each 3D kernel of the convolutional layers into a vertical column of a large weight matrix, so the input data must be accessed multiple times. As a result, significant latency and energy are consumed in the interconnect and buffers. In this paper, to maximize both weight and input data reuse for an RRAM-based PIM architecture, we propose a novel weight mapping method and the corresponding data flow, which divides the kernels and assigns the input data to different processing elements (PEs) according to their spatial locations. The proposed design achieves ~65% savings in interconnect and buffer latency and energy, and yields an overall 2.1× speedup and ~17% improvement in energy efficiency (TOPS/W) for the VGG-16 CNN, compared with a prior design based on the conventional mapping method.
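
For context, the sketch below illustrates the conventional mapping the abstract contrasts against: each 3D kernel is unrolled into one column of a weight matrix (one crossbar column per output channel), and the matching input patches are gathered im2col-style, which is why overlapping patches force each input activation to be fetched repeatedly. This is a minimal NumPy illustration under assumed layer shapes; the function names (`unroll_kernels`, `im2col`) and the 64-channel 3×3 / 32×32 sizes are illustrative and not taken from the paper.

```python
import numpy as np

def unroll_kernels(kernels):
    # Conventional PIM mapping: flatten each 3D kernel (C x K x K) into one
    # column of a 2D weight matrix, so each crossbar column produces one
    # output channel's dot product when its column currents are summed.
    n_out, c, k, _ = kernels.shape
    return kernels.reshape(n_out, c * k * k).T   # shape: (C*K*K, n_out)

def im2col(fmap, k, stride=1):
    # Gather the C x K x K input patch for every output pixel. Overlapping
    # patches mean each input activation is read from the buffer multiple
    # times -- the repeated-access cost the proposed mapping aims to reduce.
    c, h, w = fmap.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    cols = np.empty((c * k * k, out_h * out_w))
    col = 0
    for i in range(out_h):
        for j in range(out_w):
            patch = fmap[:, i*stride:i*stride + k, j*stride:j*stride + k]
            cols[:, col] = patch.ravel()
            col += 1
    return cols

# Hypothetical layer sizes, for illustration only.
kernels = np.random.randn(64, 3, 3, 3)   # 64 output channels, 3-channel 3x3 kernels
fmap = np.random.randn(3, 32, 32)        # one 32x32 input feature map
W = unroll_kernels(kernels)              # (27, 64) matrix mapped onto crossbar columns
X = im2col(fmap, k=3)                    # (27, 900) input patches, stride 1, no padding
out = W.T @ X                            # analog dot products: (64, 900) output feature map
```

The proposed mapping in the paper instead partitions the kernels and distributes input data across PEs by spatial location to improve weight and input reuse; those details are not reproduced here.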

Original language: English (US)
Title of host publication: 2019 IEEE International Symposium on Circuits and Systems, ISCAS 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728103976
State: Published - 2019
Event: 2019 IEEE International Symposium on Circuits and Systems, ISCAS 2019 - Sapporo, Japan
Duration: May 26, 2019 - May 29, 2019

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
Volume: 2019-May
ISSN (Print): 0271-4310

Conference

Conference: 2019 IEEE International Symposium on Circuits and Systems, ISCAS 2019
Country/Territory: Japan
City: Sapporo
Period: 5/26/19 - 5/29/19

Keywords

  • Deep neural network
  • Hardware accelerator
  • Machine learning
  • Non-volatile memory
  • Processing-in-memory

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
