Impact of On-chip Interconnect on In-memory Acceleration of Deep Neural Networks

Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae Sun Seo, Umit Y. Ogras, Yu Cao

Research output: Contribution to journal › Article › peer-review

Abstract

With the widespread use of Deep Neural Networks (DNNs), machine learning algorithms have evolved in two diverse directions: one with ever-increasing connection density for better accuracy and the other with more compact sizing for energy efficiency. The increase in connection density increases on-chip data movement, which makes efficient on-chip communication a critical function of the DNN accelerator. The contribution of this work is threefold. First, we illustrate that a point-to-point (P2P)-based interconnect is incapable of handling the high volume of on-chip data movement for DNNs. Second, we evaluate P2P and network-on-chip (NoC) interconnects (with a regular topology such as a mesh) for SRAM- and ReRAM-based in-memory computing (IMC) architectures across a range of DNNs. This analysis demonstrates the need to choose the optimal interconnect for an IMC-based DNN accelerator. Finally, we perform an experimental evaluation of different DNNs to empirically obtain the performance of the IMC architecture with both an NoC-tree and an NoC-mesh. We conclude that, at the tile level, an NoC-tree is appropriate for compact DNNs employed at the edge, while an NoC-mesh is necessary to accelerate DNNs with high connection density. Furthermore, we propose a technique to determine the optimal choice of interconnect for any given DNN. In this technique, we use analytical models of the NoC to evaluate the end-to-end communication latency of any given DNN. We demonstrate that interconnect optimization in the IMC architecture results in up to a 6× improvement in energy-delay-area product for VGG-19 inference compared to state-of-the-art ReRAM-based IMC architectures.
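
The selection technique described above evaluates the end-to-end communication latency of a DNN using analytical NoC models. The sketch below illustrates the general flavor of such an evaluation with a toy hop-count-plus-serialization model for a mesh and a tree; the flit width, per-hop delay, traffic volumes, and both hop-count formulas are illustrative assumptions and are not the analytical models developed in the paper.

```python
# Minimal sketch (not the paper's actual analytical model): estimate the
# layer-to-layer communication latency of a DNN mapped onto a tile-based IMC
# accelerator for two candidate interconnects. All constants and formulas
# below are illustrative assumptions.

import math

FLIT_BITS = 32          # assumed flit width (bits)
CYCLES_PER_HOP = 2      # assumed per-hop router + link delay (cycles)

def mesh_hops(src, dst, side):
    """Manhattan (XY-routing) hop count between tiles on a side x side mesh."""
    sx, sy = divmod(src, side)
    dx, dy = divmod(dst, side)
    return abs(sx - dx) + abs(sy - dy)

def tree_hops(src, dst, fanout=2):
    """Hop count through the lowest common ancestor of a balanced tree."""
    hops = 0
    while src != dst:               # climb until both leaves share an ancestor
        src //= fanout
        dst //= fanout
        hops += 2
    return hops

def dnn_comm_latency(layer_traffic_bits, tile_of_layer, hop_fn):
    """Sum serialization + hop latency over every layer-to-layer transfer."""
    total = 0
    for layer, bits in enumerate(layer_traffic_bits[:-1]):
        src, dst = tile_of_layer[layer], tile_of_layer[layer + 1]
        flits = math.ceil(bits / FLIT_BITS)
        total += flits + hop_fn(src, dst) * CYCLES_PER_HOP
    return total

# Example: a 4-layer DNN mapped to 4 tiles; the list holds the output
# activation volume (bits) each layer sends to the next (placeholder numbers).
traffic = [8192, 4096, 2048, 1024]
mapping = [0, 1, 2, 3]

side = 2  # 2x2 mesh holding the 4 tiles
mesh = dnn_comm_latency(traffic, mapping, lambda s, d: mesh_hops(s, d, side))
tree = dnn_comm_latency(traffic, mapping, tree_hops)
best = "NoC-mesh" if mesh < tree else "NoC-tree"
print(f"mesh: {mesh} cycles, tree: {tree} cycles -> choose {best}")
```

In the paper's framework, such latency estimates would be combined with energy and area models of each topology to pick the interconnect that minimizes the overall cost for a given DNN; the comparison above is only a latency-based stand-in for that decision.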

Original language: English (US)
Article number: 34
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 18
Issue number: 2
DOIs
State: Published - Apr 2022

Keywords

  • DNN acceleration
  • In-memory computing
  • RRAM
  • connection density
  • deep neural networks
  • machine learning
  • network-on-chip

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering
