5 Citations (Scopus)

Abstract

Three-dimensional (3-D) NAND flash technology is one of the most competitive integrated solutions for high-volume data storage. So far, there have been few investigations of how to use 3-D NAND flash for in-memory computing in neural network accelerators. In this brief, we propose, for the first time, using the 3-D vertical-channel NAND array architecture to implement vector-matrix multiplication (VMM). Based on array-level SPICE simulation, the bias conditions, including those of the selector layer and the unselected layers, are optimized to achieve high VMM computation accuracy. Since the VMM can be performed layer by layer in a 3-D NAND array, the read-out latency is greatly improved compared with the conventional single-cell read-out operation. The impact of device-to-device variation on the computation accuracy is also analyzed.
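
To make the layer-by-layer operation concrete, the following minimal Python sketch (not from the paper; the array dimensions, the 5% conductance spread, and the function names are illustrative assumptions) models VMM in a NAND-like array: weights are stored as cell conductances, one word-line layer is evaluated at a time, and device-to-device variation is added as a Gaussian spread so its effect on computation accuracy can be observed.

import numpy as np

# Behavioral sketch only (not the authors' SPICE model): vector-matrix
# multiplication mapped onto a 3-D NAND-like array, evaluated layer by layer,
# with weights stored as cell conductances and device-to-device variation
# modeled as a Gaussian spread on each programmed conductance.

rng = np.random.default_rng(0)

n_inputs, n_outputs, n_layers = 64, 16, 8   # hypothetical array dimensions
sigma = 0.05                                # assumed relative conductance spread

# Ideal weights, one matrix per word-line layer of the vertical stack.
weights = rng.uniform(0.0, 1.0, size=(n_layers, n_inputs, n_outputs))

# Device-to-device variation perturbs each stored conductance independently.
programmed = weights * (1.0 + sigma * rng.standard_normal(weights.shape))

def vmm_layer_by_layer(x, g):
    # x: input vector applied to the array, shape (n_inputs,)
    # g: programmed conductances, shape (n_layers, n_inputs, n_outputs)
    # Returns the per-layer weighted sums (bit-line currents), shape (n_layers, n_outputs).
    return np.stack([x @ g[layer] for layer in range(g.shape[0])])

x = rng.uniform(0.0, 0.2, size=n_inputs)    # small read-voltage-like inputs
ideal = vmm_layer_by_layer(x, weights)
actual = vmm_layer_by_layer(x, programmed)

# Mean relative VMM error introduced by the assumed device variation.
rel_err = np.abs(actual - ideal) / np.abs(ideal)
print(f"mean relative VMM error: {rel_err.mean():.3%}")

Running the sketch prints a single error figure; sweeping the assumed spread would mimic the kind of variation analysis the abstract describes.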

Original language: English (US)
Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
DOI: 10.1109/TVLSI.2018.2882194
State: Accepted/In press - Jan 1 2018

Keywords

  • 3-D NAND flash
  • neural network
  • vector-matrix multiplication (VMM)
  • weighted sum.

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering

Cite this

Three-Dimensional NAND Flash for Vector-Matrix Multiplication. / Wang, Panni; Xu, Feng; Wang, Bo; Gao, Bin; Wu, Huaqiang; Qian, He; Yu, Shimeng.

In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 01.01.2018.

Research output: Contribution to journal › Article

Wang, Panni ; Xu, Feng ; Wang, Bo ; Gao, Bin ; Wu, Huaqiang ; Qian, He ; Yu, Shimeng. / Three-Dimensional NAND Flash for Vector-Matrix Multiplication. In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2018.
@article{3f4161d3f82e49b4907ce330a0851664,
title = "Three-Dimensional nand Flash for Vector-Matrix Multiplication",
abstract = "Three-Dimensional nand flash technology is one of the most competitive integrated solutions for the high-volume massive data storage. So far, there are few investigations on how to use 3-D nand flash for in-memory computing in the neural network accelerator. In this brief, we propose using the 3-D vertical channel nand array architecture to implement the vector-matrix multiplication (VMM) with for the first time. Based on the array-level SPICE simulation, the bias condition including the selector layer and the unselected layers is optimized to achieve high computation accuracy of VMM. Since the VMM can be performed layer by layer in a 3-D nand array, the read-out latency is largely improved compared to the conventional single-cell read-out operation. The impact of device-to-device variation on the computation accuracy is also analyzed.",
keywords = "3-D nand flash, neural network, vector-matrix multiplication (VMM), weighted sum.",
author = "Panni Wang and Feng Xu and Bo Wang and Bin Gao and Huaqiang Wu and He Qian and Shimeng Yu",
year = "2018",
month = "1",
day = "1",
doi = "10.1109/TVLSI.2018.2882194",
language = "English (US)",
journal = "IEEE Transactions on Very Large Scale Integration (VLSI) Systems",
issn = "1063-8210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - JOUR

T1 - Three-Dimensional NAND Flash for Vector-Matrix Multiplication

AU - Wang, Panni

AU - Xu, Feng

AU - Wang, Bo

AU - Gao, Bin

AU - Wu, Huaqiang

AU - Qian, He

AU - Yu, Shimeng

PY - 2018/1/1

Y1 - 2018/1/1

N2 - Three-dimensional (3-D) NAND flash technology is one of the most competitive integrated solutions for high-volume data storage. So far, there have been few investigations of how to use 3-D NAND flash for in-memory computing in neural network accelerators. In this brief, we propose, for the first time, using the 3-D vertical-channel NAND array architecture to implement vector-matrix multiplication (VMM). Based on array-level SPICE simulation, the bias conditions, including those of the selector layer and the unselected layers, are optimized to achieve high VMM computation accuracy. Since the VMM can be performed layer by layer in a 3-D NAND array, the read-out latency is greatly improved compared with the conventional single-cell read-out operation. The impact of device-to-device variation on the computation accuracy is also analyzed.

AB - Three-dimensional (3-D) NAND flash technology is one of the most competitive integrated solutions for high-volume data storage. So far, there have been few investigations of how to use 3-D NAND flash for in-memory computing in neural network accelerators. In this brief, we propose, for the first time, using the 3-D vertical-channel NAND array architecture to implement vector-matrix multiplication (VMM). Based on array-level SPICE simulation, the bias conditions, including those of the selector layer and the unselected layers, are optimized to achieve high VMM computation accuracy. Since the VMM can be performed layer by layer in a 3-D NAND array, the read-out latency is greatly improved compared with the conventional single-cell read-out operation. The impact of device-to-device variation on the computation accuracy is also analyzed.

KW - 3-D NAND flash

KW - neural network

KW - vector-matrix multiplication (VMM)

KW - weighted sum.

UR - http://www.scopus.com/inward/record.url?scp=85058146247&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85058146247&partnerID=8YFLogxK

U2 - 10.1109/TVLSI.2018.2882194

DO - 10.1109/TVLSI.2018.2882194

M3 - Article

AN - SCOPUS:85058146247

JO - IEEE Transactions on Very Large Scale Integration (VLSI) Systems

JF - IEEE Transactions on Very Large Scale Integration (VLSI) Systems

SN - 1063-8210

ER -