Benchmark of Ferroelectric Transistor-Based Hybrid Precision Synapse for Neural Network Accelerator

Yandong Luo, Panni Wang, Xiaochen Peng, Xiaoyu Sun, Shimeng Yu

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

In-memory computing with analog nonvolatile memories can accelerate the in situ training of deep neural networks. Recently, we proposed a synaptic cell that combines a ferroelectric field-effect transistor (FeFET) with two CMOS transistors (2T1F) and exploits hybrid precision for training and inference, overcoming the challenges of nonlinear and asymmetric weight update and achieving nearly software-comparable training accuracy at the algorithm level. In this paper, we further present circuit-level benchmark results for this hybrid precision synapse in terms of area, latency, and energy. The corresponding array architecture is presented and the array-level operations are illustrated. The benchmark is conducted with the multilayer perceptron (MLP) + NeuroSim framework, in comparison with other capacitor-assisted hybrid precision cells (e.g., 3T1C + 2PCM). Design tradeoffs and scalability across the different implementations are discussed.
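
To make the hybrid precision idea concrete, the sketch below contrasts naive analog training, where every gradient step is written directly to a nonlinear, asymmetric nonvolatile device, with a hybrid cell that accumulates updates in a linear volatile node and only occasionally transfers them to the device. This is a minimal behavioral sketch, not the authors' NeuroSim code or calibrated FeFET data: the exponential device model, the constants G_MIN, G_MAX, BETA, dg0, and the transfer threshold THRESH are all illustrative assumptions.

    # Minimal sketch of the hybrid precision weight-update scheme.
    # Device model and all constants are illustrative assumptions,
    # not the paper's calibrated FeFET parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    G_MIN, G_MAX, BETA = 0.0, 1.0, 4.0   # assumed conductance range / nonlinearity

    def nvm_pulse(g, sign, dg0=0.02):
        """One potentiation (+1) / depression (-1) pulse on the NVM device.
        The step shrinks exponentially near the approached bound, and the
        two polarities saturate toward opposite rails, i.e. the update is
        nonlinear and asymmetric."""
        if sign > 0:
            dg = dg0 * np.exp(-BETA * (g - G_MIN) / (G_MAX - G_MIN))
        else:
            dg = -dg0 * np.exp(-BETA * (G_MAX - g) / (G_MAX - G_MIN))
        return float(np.clip(g + dg, G_MIN, G_MAX))

    # Target trajectory: a stream of small signed gradient steps.
    grads = rng.normal(0.0, 0.01, size=2000)
    ideal = np.clip(0.5 + np.cumsum(grads), G_MIN, G_MAX)

    # (a) Naive analog training: one NVM pulse per gradient sign.
    g_naive, naive_err = 0.5, []
    for k, dw in enumerate(grads):
        g_naive = nvm_pulse(g_naive, np.sign(dw))
        naive_err.append(abs(g_naive - ideal[k]))

    # (b) Hybrid precision: accumulate in a linear volatile node `acc`,
    #     transfer a coarse pulse to the NVM only when it overflows.
    g_nvm, acc, THRESH = 0.5, 0.0, 0.05   # transfer threshold is assumed
    hybrid_err = []
    for k, dw in enumerate(grads):
        acc += dw                          # linear, symmetric accumulation
        while abs(acc) >= THRESH:          # occasional coarse transfer
            g_nvm = nvm_pulse(g_nvm, np.sign(acc))
            acc -= np.sign(acc) * THRESH
        hybrid_err.append(abs(g_nvm + acc - ideal[k]))

    print(f"mean |error| naive : {np.mean(naive_err):.4f}")
    print(f"mean |error| hybrid: {np.mean(hybrid_err):.4f}")

In this sketch the variable acc plays the role of the volatile high-precision element: the capacitor in a 3T1C + 2PCM cell, or the analogous node in the 2T1F cell. Because accumulation in acc is linear and symmetric, small gradients are not distorted by the device nonlinearity, which is the mechanism the abstract credits for nearly software-comparable training accuracy.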

Original language: English (US)
Article number: 8746639
Pages (from-to): 142-150
Number of pages: 9
Journal: IEEE Journal on Exploratory Solid-State Computational Devices and Circuits
Volume: 5
Issue number: 2
State: Published - Dec 2019
Externally published: Yes

Keywords

  • Benchmark
  • ferroelectric transistor (FeFET)
  • in-memory computing
  • neural network
  • synaptic device

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Hardware and Architecture
  • Electrical and Electronic Engineering
