Abstract
For Internet of Things (IoT) edge devices, local sensemaking capability is highly attractive, as it avoids sending all the data back to the cloud for information processing. For image pattern recognition, neuro-inspired machine learning algorithms have demonstrated remarkable power. To effectively implement learning algorithms on-chip for IoT edge devices, on-chip synaptic memory architectures have been proposed to implement key operations such as the weighted sum, or matrix-vector multiplication. In this paper, we propose a low-power design of a static random access memory (SRAM) synaptic array for implementing a low-precision ternary neural network. We experimentally demonstrate that the supply voltage (VDD) of the SRAM array can be aggressively reduced to a level where the SRAM cell becomes susceptible to bit failures. Testing results from 65-nm SRAM chips indicate that VDD can be reduced from the nominal 1 V to 0.55 V (or 0.5 V) with a bit error rate of ∼0.23% (or ∼1.56%), which introduces only ∼0.08% (or ∼1.68%) degradation in the classification accuracy. As a result, the power consumption can be reduced by more than 8× (or 10×).
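The abstract's core idea — that a low bit error rate in stored ternary weights perturbs the weighted sum only slightly — can be illustrated with a small simulation. This is a hypothetical sketch, not the authors' implementation: the vector length, the binary-input assumption, and the error model (a failing cell reads back as a random other ternary value) are illustrative assumptions; only the ∼0.23% error rate comes from the paper.

```python
import random

random.seed(0)

BER = 0.0023  # ~0.23% bit error rate reported at VDD = 0.55 V

def weighted_sum(inputs, weights):
    """Weighted sum (dot product) of binary inputs with ternary weights."""
    return sum(x * w for x, w in zip(inputs, weights))

def inject_bit_errors(weights, ber):
    """Error model (assumed): with probability `ber`, a stored ternary
    weight is read back as a random different value in {-1, 0, +1}."""
    noisy = []
    for w in weights:
        if random.random() < ber:
            noisy.append(random.choice([v for v in (-1, 0, 1) if v != w]))
        else:
            noisy.append(w)
    return noisy

n = 512  # illustrative synaptic array row length
inputs = [random.choice([0, 1]) for _ in range(n)]
weights = [random.choice([-1, 0, 1]) for _ in range(n)]

clean = weighted_sum(inputs, weights)
noisy = weighted_sum(inputs, inject_bit_errors(weights, BER))
print(clean, noisy)  # at this error rate, roughly one weight in 512 flips
```

With ~0.23% errors over a few hundred weights, the expected number of flipped cells per weighted sum is about one, and each flip shifts the sum by at most 2, which is consistent with the small accuracy degradation the paper reports.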
Original language | English (US) |
---|---|
Article number | 7995135 |
Pages (from-to) | 2962-2965 |
Number of pages | 4 |
Journal | IEEE Transactions on Very Large Scale Integration (VLSI) Systems |
Volume | 25 |
Issue number | 10 |
DOIs | |
State | Published - Oct 2017 |
Keywords
- Binary synapses
- classification
- low power
- neural network
- static random access memory (SRAM)
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Electrical and Electronic Engineering