Ferroelectric FET analog synapse for acceleration of deep neural network training

Matthew Jerry, Pai Yu Chen, Jianchi Zhang, Pankaj Sharma, Kai Ni, Shimeng Yu, Suman Datta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

18 Citations (Scopus)

Abstract

The memory requirements of at-scale deep neural networks (DNNs) dictate that synaptic weight values be stored and updated in off-chip memory, such as DRAM, limiting energy efficiency and training time. Monolithic crossbar/pseudo-crossbar arrays with analog non-volatile memories capable of storing and updating weights on-chip offer the possibility of accelerating DNN training. Here, we harness the dynamics of voltage-controlled partial polarization switching in ferroelectric FETs (FeFETs) to demonstrate such an analog synapse. We develop a transient Preisach model that accurately predicts minor-loop trajectories and remanent polarization charge (P_r) for arbitrary pulse width, voltage, and history. We experimentally demonstrate a 5-bit FeFET synapse with symmetric potentiation and depression characteristics and a 45× tunable conductance range with 75 ns update pulses. A circuit macro-model is used to evaluate and benchmark the on-chip learning performance (area, latency, energy, accuracy) of the FeFET synaptic core, revealing a 10³ to 10⁶ acceleration in online learning latency over multi-state RRAM-based analog synapses.
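As a companion to the abstract, the sketch below illustrates the classical Preisach picture that the paper's transient model builds on: hysteresis as a weighted superposition of elementary bistable relays (hysterons) on a threshold plane. Everything here is an illustrative assumption (the Gaussian weight density, the threshold grid, and the pulse amplitudes), not the fitted transient model from the paper.

import numpy as np

# Minimal classical Preisach hysteresis sketch (illustrative only; the paper
# fits a *transient* Preisach model to FeFET switching data). Each hysteron
# is a relay that switches up when the input exceeds its threshold alpha and
# down when it falls below its threshold beta, with beta <= alpha. The net
# polarization is the weighted sum of all relay states.

N = 200                                    # threshold-grid resolution (assumed)
alpha = np.linspace(-3.0, 3.0, N)          # "up" switching thresholds
beta = np.linspace(-3.0, 3.0, N)           # "down" switching thresholds
A, B = np.meshgrid(alpha, beta, indexing="ij")
valid = B <= A                             # keep only physical hysterons

w = np.exp(-(A**2 + B**2) / 2.0) * valid   # assumed Gaussian weight density
w /= w.sum()

state = -np.ones((N, N))                   # start fully polarized "down"

def apply_voltage(v):
    """Apply one voltage level; return the normalized net polarization."""
    state[(v >= A) & valid] = 1.0          # relays whose alpha is reached flip up
    state[(v <= B) & valid] = -1.0         # relays whose beta is reached flip down
    return float((w * state).sum())        # net polarization in [-1, 1]

# Classical Preisach is rate-independent: repeating an identical pulse does
# nothing after the first one, which is exactly why the paper adds transient
# (pulse-width) dynamics. Here, gradual potentiation is emulated instead
# with a staircase of increasing pulse amplitudes.
for i, v in enumerate(np.linspace(0.5, 2.5, 5)):
    apply_voltage(v)                       # programming pulse (amplitude assumed)
    p_r = apply_voltage(0.0)               # remanent polarization at zero bias
    print(f"pulse {i + 1}: V = {v:.2f} -> P_r = {p_r:+.3f}")

Each staircase step switches only the hysterons lying between the previous and current amplitudes, so the remanent polarization (and hence the FeFET channel conductance) moves through intermediate states rather than toggling between two levels, which is the multi-level analog-synapse behavior the abstract describes.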

Original language: English (US)
Title of host publication: 2017 IEEE International Electron Devices Meeting, IEDM 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6.2.1-6.2.4
ISBN (Electronic): 9781538635599
DOIs: 10.1109/IEDM.2017.8268338
State: Published - Jan 23 2018
Event: 63rd IEEE International Electron Devices Meeting, IEDM 2017 - San Francisco, United States
Duration: Dec 2 2017 – Dec 6 2017

Other

Other: 63rd IEEE International Electron Devices Meeting, IEDM 2017
Country: United States
City: San Francisco
Period: 12/2/17 – 12/6/17

Fingerprint

  • Synapses
  • Field effect transistors
  • Ferroelectric materials
  • Analogs
  • Polarization
  • Electric potential
  • Data storage equipment
  • Dynamic random access storage
  • Energy efficiency
  • Learning
  • Education
  • Chips
  • Macros
  • Pulse duration
  • Trajectories
  • Harnesses

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Electrical and Electronic Engineering
  • Materials Chemistry

Cite this

Jerry, M., Chen, P. Y., Zhang, J., Sharma, P., Ni, K., Yu, S., & Datta, S. (2018). Ferroelectric FET analog synapse for acceleration of deep neural network training. In 2017 IEEE International Electron Devices Meeting, IEDM 2017 (pp. 6.2.1-6.2.4). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IEDM.2017.8268338

Ferroelectric FET analog synapse for acceleration of deep neural network training. / Jerry, Matthew; Chen, Pai Yu; Zhang, Jianchi; Sharma, Pankaj; Ni, Kai; Yu, Shimeng; Datta, Suman.

2017 IEEE International Electron Devices Meeting, IEDM 2017. Institute of Electrical and Electronics Engineers Inc., 2018. p. 6.2.1-6.2.4.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Jerry, M, Chen, PY, Zhang, J, Sharma, P, Ni, K, Yu, S & Datta, S 2018, Ferroelectric FET analog synapse for acceleration of deep neural network training. in 2017 IEEE International Electron Devices Meeting, IEDM 2017. Institute of Electrical and Electronics Engineers Inc., pp. 6.2.1-6.2.4, 63rd IEEE International Electron Devices Meeting, IEDM 2017, San Francisco, United States, 12/2/17. https://doi.org/10.1109/IEDM.2017.8268338
Jerry M, Chen PY, Zhang J, Sharma P, Ni K, Yu S et al. Ferroelectric FET analog synapse for acceleration of deep neural network training. In 2017 IEEE International Electron Devices Meeting, IEDM 2017. Institute of Electrical and Electronics Engineers Inc. 2018. p. 6.2.1-6.2.4 https://doi.org/10.1109/IEDM.2017.8268338
Jerry, Matthew ; Chen, Pai Yu ; Zhang, Jianchi ; Sharma, Pankaj ; Ni, Kai ; Yu, Shimeng ; Datta, Suman. / Ferroelectric FET analog synapse for acceleration of deep neural network training. 2017 IEEE International Electron Devices Meeting, IEDM 2017. Institute of Electrical and Electronics Engineers Inc., 2018. pp. 6.2.1-6.2.4
@inproceedings{96df0d1896ee4232bc6fb1b3d236da9e,
title = "Ferroelectric FET analog synapse for acceleration of deep neural network training",
abstract = "The memory requirement of at-scale deep neural networks (DNN) dictate that synaptic weight values be stored and updated in off-chip memory such as DRAM, limiting the energy efficiency and training time. Monolithic cross-bar/pseudo cross-bar arrays with analog non-volatile memories capable of storing and updating weights on-chip offer the possibility of accelerating DNN training. Here, we harness the dynamics of voltage controlled partial polarization switching in ferroelectric-FETs (FeFET) to demonstrate such an analog synapse. We develop a transient Presiach model that accurately predicts minor loop trajectories and remnant polarization charge (Pr) for arbitrary pulse width, voltage, and history. We experimentally demonstrate a 5-bit FeFET synapse with symmetric potentiation and depression characteristics, and a 45x tunable range in conductance with 75ns update pulse. A circuit macro-model is used to evaluate and benchmark onchip learning performance (area, latency, energy, accuracy) of FeFET synaptic core revealing a 103 to 106 acceleration in online learning latency over multi-state RRAM based analog synapses.",
author = "Matthew Jerry and Chen, {Pai Yu} and Jianchi Zhang and Pankaj Sharma and Kai Ni and Shimeng Yu and Suman Datta",
year = "2018",
month = "1",
day = "23",
doi = "10.1109/IEDM.2017.8268338",
language = "English (US)",
pages = "6.2.1--6.2.4",
booktitle = "2017 IEEE International Electron Devices Meeting, IEDM 2017",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - Ferroelectric FET analog synapse for acceleration of deep neural network training

AU - Jerry, Matthew

AU - Chen, Pai Yu

AU - Zhang, Jianchi

AU - Sharma, Pankaj

AU - Ni, Kai

AU - Yu, Shimeng

AU - Datta, Suman

PY - 2018/1/23

Y1 - 2018/1/23

N2 - The memory requirements of at-scale deep neural networks (DNNs) dictate that synaptic weight values be stored and updated in off-chip memory, such as DRAM, limiting energy efficiency and training time. Monolithic crossbar/pseudo-crossbar arrays with analog non-volatile memories capable of storing and updating weights on-chip offer the possibility of accelerating DNN training. Here, we harness the dynamics of voltage-controlled partial polarization switching in ferroelectric FETs (FeFETs) to demonstrate such an analog synapse. We develop a transient Preisach model that accurately predicts minor-loop trajectories and remanent polarization charge (P_r) for arbitrary pulse width, voltage, and history. We experimentally demonstrate a 5-bit FeFET synapse with symmetric potentiation and depression characteristics and a 45× tunable conductance range with 75 ns update pulses. A circuit macro-model is used to evaluate and benchmark the on-chip learning performance (area, latency, energy, accuracy) of the FeFET synaptic core, revealing a 10³ to 10⁶ acceleration in online learning latency over multi-state RRAM-based analog synapses.

AB - The memory requirements of at-scale deep neural networks (DNNs) dictate that synaptic weight values be stored and updated in off-chip memory, such as DRAM, limiting energy efficiency and training time. Monolithic crossbar/pseudo-crossbar arrays with analog non-volatile memories capable of storing and updating weights on-chip offer the possibility of accelerating DNN training. Here, we harness the dynamics of voltage-controlled partial polarization switching in ferroelectric FETs (FeFETs) to demonstrate such an analog synapse. We develop a transient Preisach model that accurately predicts minor-loop trajectories and remanent polarization charge (P_r) for arbitrary pulse width, voltage, and history. We experimentally demonstrate a 5-bit FeFET synapse with symmetric potentiation and depression characteristics and a 45× tunable conductance range with 75 ns update pulses. A circuit macro-model is used to evaluate and benchmark the on-chip learning performance (area, latency, energy, accuracy) of the FeFET synaptic core, revealing a 10³ to 10⁶ acceleration in online learning latency over multi-state RRAM-based analog synapses.

UR - http://www.scopus.com/inward/record.url?scp=85045181722&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85045181722&partnerID=8YFLogxK

U2 - 10.1109/IEDM.2017.8268338

DO - 10.1109/IEDM.2017.8268338

M3 - Conference contribution

AN - SCOPUS:85045181722

SP - 6.2.1-6.2.4

BT - 2017 IEEE International Electron Devices Meeting, IEDM 2017

PB - Institute of Electrical and Electronics Engineers Inc.

ER -