20 Citations (Scopus)

Abstract

Neuro-inspired architectures based on synaptic memory arrays have been proposed for on-chip acceleration of the weighted sum and weight update in machine/deep learning algorithms. In this paper, we developed NeuroSim, a circuit-level macro model that estimates the area, latency, dynamic energy, and leakage power to facilitate design space exploration of neuro-inspired architectures with mainstream and emerging device technologies. NeuroSim provides a flexible interface and a wide variety of design options at the circuit and device levels, so it can serve as a supporting tool for neural network simulation, providing circuit-level performance evaluation. With NeuroSim, an integrated framework can be built with a hierarchical organization from the device level (synaptic device properties) to the circuit level (array architectures) and up to the algorithm level (neural network topology), enabling instruction-accurate evaluation of both the learning accuracy and the circuit-level performance metrics at run-time of online learning. Using a multilayer perceptron (MLP) as the case-study algorithm, we investigated the impact of the “analog” emerging non-volatile memory (eNVM)’s “non-ideal” device properties and benchmarked the trade-offs among SRAM-, digital eNVM-, and analog eNVM-based architectures for online learning and offline classification.
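The two core array operations the abstract describes, the weighted sum and the "non-ideal" analog weight update, can be sketched in a few lines. This is an illustrative sketch only, not NeuroSim's actual model: the conductance range `G_MIN`/`G_MAX`, the saturation-style nonlinearity, and the function names are all assumptions made for this example.

```python
import numpy as np

# Assumed conductance range of an analog eNVM cell (siemens); illustrative values.
G_MIN, G_MAX = 1e-7, 1e-5

def weighted_sum(G, v_in):
    """Column currents of a synaptic crossbar: I_j = sum_i G[i, j] * v_in[i].

    G is the (rows x cols) conductance matrix encoding the weights;
    v_in is the vector of input voltages driving the rows.
    """
    return v_in @ G

def update_conductance(G, dW, nonlinearity=0.5):
    """Apply a weight update with a simple saturation-style non-ideality:
    the effective conductance step shrinks as a cell approaches G_MAX
    (potentiation) or G_MIN (depression). Assumes G stays within range.
    """
    span = G_MAX - G_MIN
    headroom = np.where(dW > 0, (G_MAX - G) / span, (G - G_MIN) / span)
    G_new = G + dW * span * headroom ** nonlinearity
    return np.clip(G_new, G_MIN, G_MAX)
```

For example, a 4x3 crossbar with all cells at mid-range conductance yields three column currents per input vector, and repeated potentiation steps of equal nominal size produce progressively smaller conductance increments, which is the kind of "non-ideal" update behavior whose impact on learning accuracy the paper benchmarks.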

Fingerprint

Benchmarking
Macros
Networks (circuits)
Data storage equipment
Neural networks
Static random access storage
Multilayer neural networks
Learning algorithms
Topology

Keywords

  • emerging non-volatile memory
  • machine learning
  • neural network
  • neuromorphic computing
  • offline classification
  • online learning
  • synaptic devices

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering

Cite this

@article{fa5c12f683904048925223886b718eff,
title = "NeuroSim: A Circuit-Level Macro Model for Benchmarking Neuro-Inspired Architectures in Online Learning",
abstract = "Neuro-inspired architectures based on synaptic memory arrays have been proposed for on-chip acceleration of the weighted sum and weight update in machine/deep learning algorithms. In this paper, we developed NeuroSim, a circuit-level macro model that estimates the area, latency, dynamic energy, and leakage power to facilitate design space exploration of neuro-inspired architectures with mainstream and emerging device technologies. NeuroSim provides a flexible interface and a wide variety of design options at the circuit and device levels, so it can serve as a supporting tool for neural network simulation, providing circuit-level performance evaluation. With NeuroSim, an integrated framework can be built with a hierarchical organization from the device level (synaptic device properties) to the circuit level (array architectures) and up to the algorithm level (neural network topology), enabling instruction-accurate evaluation of both the learning accuracy and the circuit-level performance metrics at run-time of online learning. Using a multilayer perceptron (MLP) as the case-study algorithm, we investigated the impact of the “analog” emerging non-volatile memory (eNVM)’s “non-ideal” device properties and benchmarked the trade-offs among SRAM-, digital eNVM-, and analog eNVM-based architectures for online learning and offline classification.",
keywords = "emerging non-volatile memory, machine learning, neural network, neuromorphic computing, offline classification, online learning, synaptic devices",
author = "Chen, {Pai-Yu} and Xiaochen Peng and Shimeng Yu",
year = "2018",
month = "1",
day = "3",
doi = "10.1109/TCAD.2018.2789723",
language = "English (US)",
journal = "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems",
issn = "0278-0070",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - JOUR

T1 - NeuroSim

T2 - A Circuit-Level Macro Model for Benchmarking Neuro-Inspired Architectures in Online Learning

AU - Chen, Pai-Yu

AU - Peng, Xiaochen

AU - Yu, Shimeng

PY - 2018/1/3

Y1 - 2018/1/3

N2 - Neuro-inspired architectures based on synaptic memory arrays have been proposed for on-chip acceleration of the weighted sum and weight update in machine/deep learning algorithms. In this paper, we developed NeuroSim, a circuit-level macro model that estimates the area, latency, dynamic energy, and leakage power to facilitate design space exploration of neuro-inspired architectures with mainstream and emerging device technologies. NeuroSim provides a flexible interface and a wide variety of design options at the circuit and device levels, so it can serve as a supporting tool for neural network simulation, providing circuit-level performance evaluation. With NeuroSim, an integrated framework can be built with a hierarchical organization from the device level (synaptic device properties) to the circuit level (array architectures) and up to the algorithm level (neural network topology), enabling instruction-accurate evaluation of both the learning accuracy and the circuit-level performance metrics at run-time of online learning. Using a multilayer perceptron (MLP) as the case-study algorithm, we investigated the impact of the “analog” emerging non-volatile memory (eNVM)’s “non-ideal” device properties and benchmarked the trade-offs among SRAM-, digital eNVM-, and analog eNVM-based architectures for online learning and offline classification.

AB - Neuro-inspired architectures based on synaptic memory arrays have been proposed for on-chip acceleration of the weighted sum and weight update in machine/deep learning algorithms. In this paper, we developed NeuroSim, a circuit-level macro model that estimates the area, latency, dynamic energy, and leakage power to facilitate design space exploration of neuro-inspired architectures with mainstream and emerging device technologies. NeuroSim provides a flexible interface and a wide variety of design options at the circuit and device levels, so it can serve as a supporting tool for neural network simulation, providing circuit-level performance evaluation. With NeuroSim, an integrated framework can be built with a hierarchical organization from the device level (synaptic device properties) to the circuit level (array architectures) and up to the algorithm level (neural network topology), enabling instruction-accurate evaluation of both the learning accuracy and the circuit-level performance metrics at run-time of online learning. Using a multilayer perceptron (MLP) as the case-study algorithm, we investigated the impact of the “analog” emerging non-volatile memory (eNVM)’s “non-ideal” device properties and benchmarked the trade-offs among SRAM-, digital eNVM-, and analog eNVM-based architectures for online learning and offline classification.

KW - emerging non-volatile memory

KW - machine learning

KW - neural network

KW - neuromorphic computing

KW - offline classification

KW - online learning

KW - synaptic devices

UR - http://www.scopus.com/inward/record.url?scp=85040035665&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85040035665&partnerID=8YFLogxK

U2 - 10.1109/TCAD.2018.2789723

DO - 10.1109/TCAD.2018.2789723

M3 - Article

JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

SN - 0278-0070

ER -