Abstract

Deep neural networks typically have far more parameters than shallow models, making them prohibitive for small-footprint devices. Recent research shows that there is considerable redundancy in the parameter space of deep neural networks. In this paper, we propose a method to compress deep neural networks using the Fisher Information metric, which we estimate with a stochastic optimization method that tracks second-order information in the network. We first remove unimportant parameters and then use non-uniform fixed-point quantization to assign more bits to parameters with higher Fisher Information estimates. We evaluate our method on a classification task with a convolutional neural network trained on the MNIST data set. Experimental results show that our method outperforms existing methods for both network pruning and quantization.
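
To make the two-stage procedure in the abstract concrete, here is a minimal sketch (not the authors' implementation) of Fisher-Information-guided pruning followed by non-uniform fixed-point quantization. It assumes a diagonal Fisher estimate built from squared per-example gradients, the kind of second-order statistic adaptive stochastic optimizers already track; the function names and bit-width choices below are illustrative assumptions.

```python
import numpy as np

def diagonal_fisher(per_example_grads):
    # Diagonal Fisher Information estimate: mean of squared per-example
    # gradients (an assumed stand-in for the paper's stochastic estimate).
    return np.mean(np.square(per_example_grads), axis=0)

def fisher_prune(weights, fisher, keep_ratio=0.3):
    # Keep only the fraction of parameters with the largest Fisher values;
    # everything else is set to zero.
    k = max(1, int(keep_ratio * weights.size))
    threshold = np.sort(fisher.ravel())[-k]
    mask = (fisher >= threshold).astype(weights.dtype)
    return weights * mask, mask

def fixed_point(w, bits):
    # Symmetric fixed-point rounding with 2**(bits-1) levels per sign.
    max_abs = np.max(np.abs(w))
    if max_abs == 0:
        return w
    step = max_abs / (2 ** (bits - 1))
    return np.round(w / step) * step

def fisher_quantize(weights, fisher, low_bits=4, high_bits=8, high_frac=0.25):
    # Non-uniform quantization: parameters in the top `high_frac` by Fisher
    # Information get more bits than the rest.
    cutoff = np.quantile(fisher, 1.0 - high_frac)
    return np.where(fisher >= cutoff,
                    fixed_point(weights, high_bits),
                    fixed_point(weights, low_bits))

# Toy usage with random weights and gradients (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))
g = rng.normal(size=(32, 256, 128))   # batch of per-example gradients
F = diagonal_fisher(g)
w_pruned, mask = fisher_prune(w, F, keep_ratio=0.3)
w_compressed = fisher_quantize(w_pruned, F) * mask  # reapply mask so pruned weights stay zero
```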

Original language: English (US)
Title of host publication: Proceedings - IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016
Publisher: IEEE Computer Society
Pages: 93-98
Number of pages: 6
Volume: 2016-September
ISBN (Electronic): 9781467390385
DOI: 10.1109/ISVLSI.2016.117
State: Published - Sep 2 2016
Event: 15th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016 - Pittsburgh, United States
Duration: Jul 11 2016 - Jul 13 2016

Fingerprint

  • Information theory
  • Redundancy
  • Neural networks
  • Deep neural networks

ASJC Scopus subject areas

  • Hardware and Architecture
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Cite this

Tu, M., Berisha, V., Cao, Y., & Seo, J. (2016). Reducing the model order of deep neural networks using information theory. In Proceedings - IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016 (Vol. 2016-September, pp. 93-98). [7560179] IEEE Computer Society. https://doi.org/10.1109/ISVLSI.2016.117

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
