TY - GEN
T1 - Digital CMOS neuromorphic processor design featuring unsupervised online learning
AU - Seo, Jae-sun
AU - Seok, Mingoo
PY - 2015/10/30
Y1 - 2015/10/30
N2 - The power-efficient yet computationally powerful brain has inspired a broad range of neural networks for solving recognition and classification tasks. Compared to supervised deep neural networks (DNNs), which have been very successful on well-defined labeled datasets, bio-plausible spiking neural networks (SNNs) with unsupervised learning rules could be well-suited for learning representations from massive amounts of unlabeled data. To design dense, low-power hardware for such unsupervised SNNs, we employ digital CMOS circuits for neuromorphic processors, which can fully exploit transistor scaling and dynamic voltage scaling. As examples, we present two neuromorphic processor designs. First, a 45nm neuromorphic chip is designed for a small-scale network of spiking neurons. Through tight integration of memory (64k SRAM synapses) and computation (256 digital neurons), the chip demonstrates on-chip learning on pattern recognition tasks down to a 0.53V supply. Second, a 65nm neuromorphic processor that performs unsupervised online spike clustering for brain-sensing applications is implemented with 1.2k digital neurons and 4.7k latch-based synapses; it consumes 9.3μW/ch at a 0.3V supply. Synapse hardware precision, efficient synapse memory array access, overfitting, and voltage scaling are discussed in the context of dense and power-efficient on-chip learning for CMOS spiking neural networks.
KW - CMOS
KW - digital circuits
KW - low-power
KW - low-voltage
KW - neuromorphic computing
KW - on-chip learning
KW - spiking neural networks
KW - unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=84960099852&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84960099852&partnerID=8YFLogxK
U2 - 10.1109/VLSI-SoC.2015.7314390
DO - 10.1109/VLSI-SoC.2015.7314390
M3 - Conference contribution
AN - SCOPUS:84960099852
T3 - IEEE/IFIP International Conference on VLSI and System-on-Chip, VLSI-SoC
SP - 49
EP - 51
BT - 2015 IFIP/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2015
PB - IEEE Computer Society
T2 - 23rd IFIP/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2015
Y2 - 5 October 2015 through 7 October 2015
ER -