Benchmark of Ferroelectric Transistor Based Hybrid Precision Synapse for Neural Network Accelerator

Yandong Luo, Panni Wang, Xiaochen Peng, Xiaoyu Sun, Shimeng Yu

Research output: Contribution to journal › Article

Abstract

In-memory computing with analog non-volatile memories can accelerate the in-situ training of deep neural networks. Recently, we proposed a synaptic cell that combines a ferroelectric transistor (FeFET) with two CMOS transistors (2T-1F) and exploits hybrid precision for training and inference, which overcomes the challenges of nonlinear and asymmetric weight update and achieves nearly software-comparable training accuracy at the algorithm level. In this paper, we further present circuit-level benchmark results for this hybrid precision synapse in terms of area, latency, and energy. The corresponding array architecture is presented and the array-level operations are illustrated. The benchmark is conducted with the MLP+NeuroSim framework, in comparison to another capacitor-assisted (e.g., 3T1C+2PCM) hybrid precision cell. Design trade-offs and scalability of the different implementations are discussed.
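The hybrid precision idea described in the abstract — a precise volatile element that accumulates small training updates, paired with a coarse non-volatile element (the FeFET) that holds the weight for inference — can be illustrated with a small numerical sketch. This is a conceptual model only, not the paper's circuit or the NeuroSim benchmark code: the function name `hybrid_update`, the number of conductance levels, and all parameter values are illustrative assumptions.

```python
# Conceptual sketch (assumed parameters, not the paper's implementation):
# a volatile high-precision buffer stands in for the capacitor/transistor
# part of a hybrid cell, and a quantized nonvolatile value stands in for
# the FeFET conductance.
import numpy as np

N_LEVELS = 32                          # assumed number of FeFET conductance levels
W_RANGE = 1.0                          # weights confined to [-W_RANGE, +W_RANGE]
STEP = 2 * W_RANGE / (N_LEVELS - 1)    # one nonvolatile conductance step

def hybrid_update(w_nv, w_v, grad, lr=0.01):
    """Accumulate a precise gradient step in the volatile part; transfer
    whole conductance steps to the nonvolatile part once they build up."""
    w_v = w_v - lr * grad                  # precise volatile accumulation
    n_steps = np.round(w_v / STEP)         # whole steps ready to transfer
    w_nv = np.clip(w_nv + n_steps * STEP, -W_RANGE, W_RANGE)
    w_v = w_v - n_steps * STEP             # keep only the sub-step residue
    return w_nv, w_v

# The effective weight seen by inference is the nonvolatile part alone.
w_nv, w_v = 0.0, 0.0
for _ in range(200):
    w_nv, w_v = hybrid_update(w_nv, w_v, grad=-0.05)
print(w_nv)  # advances toward +W_RANGE in whole conductance steps
```

Because each transfer moves the nonvolatile weight by a full conductance step, the FeFET never needs fine-grained analog programming during training, which is one way to sidestep the nonlinear and asymmetric weight-update problem the abstract mentions.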

Original language: English (US)
Journal: IEEE Journal on Exploratory Solid-State Computational Devices and Circuits
DOI: 10.1109/JXCDC.2019.2925061
State: Published - Jan 1 2019
Externally published: Yes

Keywords

  • Benchmark
  • Ferroelectric transistor
  • In-memory computing
  • Neural network
  • Synaptic device

ASJC Scopus subject areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials

Cite this

Benchmark of Ferroelectric Transistor Based Hybrid Precision Synapse for Neural Network Accelerator. / Luo, Yandong; Wang, Panni; Peng, Xiaochen; Sun, Xiaoyu; Yu, Shimeng.

In: IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, 01.01.2019.

@article{edc06333f713461d8bb7f0d6b97aea4e,
title = "Benchmark of Ferroelectric Transistor Based Hybrid Precision Synapse for Neural Network Accelerator",
abstract = "In-memory computing with analog non-volatile memories can accelerate the in-situ training of deep neural networks. Recently, we proposed a synaptic cell of a ferroelectric transistor (FeFET) with two CMOS transistors (2T-1F) that exploits the hybrid precision for training and inference, which overcomes the challenges of nonlinear and asymmetric weight update and achieves nearly software comparable training accuracy at the algorithm-level. In this paper, we further present the circuit-level benchmark results of this hybrid precision synapse in terms of area, latency and energy. The corresponding array architecture is presented and the array level operations are illustrated. The benchmark is conducted by MLP+ NeuroSim framework with comparison to other capacitor-assisted (e.g. 3T1C+2PCM) hybrid precision cell. The design trade-offs and scalability are discussed between different implementations.",
keywords = "Benchmark, Ferroelectric transistor, In-memory computing, Neural network, Synaptic device",
author = "Yandong Luo and Panni Wang and Xiaochen Peng and Xiaoyu Sun and Shimeng Yu",
year = "2019",
month = "1",
day = "1",
doi = "10.1109/JXCDC.2019.2925061",
language = "English (US)",
journal = "IEEE Journal on Exploratory Solid-State Computational Devices and Circuits",
issn = "2329-9231",

}

TY - JOUR

T1 - Benchmark of Ferroelectric Transistor Based Hybrid Precision Synapse for Neural Network Accelerator

AU - Luo, Yandong

AU - Wang, Panni

AU - Peng, Xiaochen

AU - Sun, Xiaoyu

AU - Yu, Shimeng

PY - 2019/1/1

Y1 - 2019/1/1

N2 - In-memory computing with analog non-volatile memories can accelerate the in-situ training of deep neural networks. Recently, we proposed a synaptic cell of a ferroelectric transistor (FeFET) with two CMOS transistors (2T-1F) that exploits the hybrid precision for training and inference, which overcomes the challenges of nonlinear and asymmetric weight update and achieves nearly software comparable training accuracy at the algorithm-level. In this paper, we further present the circuit-level benchmark results of this hybrid precision synapse in terms of area, latency and energy. The corresponding array architecture is presented and the array level operations are illustrated. The benchmark is conducted by MLP+ NeuroSim framework with comparison to other capacitor-assisted (e.g. 3T1C+2PCM) hybrid precision cell. The design trade-offs and scalability are discussed between different implementations.

KW - Benchmark

KW - Ferroelectric transistor

KW - In-memory computing

KW - Neural network

KW - Synaptic device

UR - http://www.scopus.com/inward/record.url?scp=85068195628&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85068195628&partnerID=8YFLogxK

U2 - 10.1109/JXCDC.2019.2925061

DO - 10.1109/JXCDC.2019.2925061

M3 - Article

JO - IEEE Journal on Exploratory Solid-State Computational Devices and Circuits

JF - IEEE Journal on Exploratory Solid-State Computational Devices and Circuits

SN - 2329-9231

ER -