System-Level Benchmarking of Chiplet-based IMC Architectures for Deep Neural Network Acceleration

Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae Sun Seo, Umit Ogras, Yu Cao

Research output: Contribution to journal › Conference article › peer-review

Abstract

In-memory computing (IMC) on a large monolithic chip for deep learning faces area, yield, and fabrication-cost challenges due to ever-increasing model sizes. 2.5D or chiplet-based architectures integrate multiple small chiplets into a large computing system, offering a feasible way to accelerate large deep learning models. In this work, we present a novel benchmarking tool, SIAM, to evaluate the performance of chiplet-based IMC architectures and to explore different architectural configurations. SIAM integrates device, circuit, architecture, network-on-chip (NoC), network-on-package (NoP), and DRAM access models to benchmark an end-to-end system. SIAM supports multiple deep neural networks (DNNs), different architectural configurations, and efficient design space exploration. We demonstrate the effectiveness of SIAM by benchmarking state-of-the-art DNNs across different datasets.

Original language: English (US)
Journal: Proceedings of International Conference on ASIC
State: Published - 2021
Event: 14th IEEE International Conference on ASIC, ASICON 2021 - Kunming, China
Duration: Oct 26, 2021 - Oct 29, 2021

ASJC Scopus subject areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering