Abstract

Deep neural networks typically have far more parameters than shallow models, making them prohibitive for small-footprint devices. Recent research shows that there is considerable redundancy in the parameter space of deep neural networks. In this paper, we propose a method to compress deep neural networks using the Fisher Information metric, which we estimate with a stochastic optimization method that tracks second-order information in the network. We first remove unimportant parameters and then use non-uniform fixed-point quantization to assign more bits to parameters with higher Fisher Information estimates. We evaluate our method on a classification task with a convolutional neural network trained on the MNIST data set. Experimental results show that our method outperforms existing methods for both network pruning and quantization.
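
The paper itself does not ship code; the NumPy sketch below is a minimal illustration of the pipeline the abstract outlines, under stated assumptions: the diagonal empirical Fisher is approximated by an average of squared per-parameter gradients (the second-order statistic that a stochastic optimizer such as Adam already tracks), the lowest-Fisher parameters are pruned, and the survivors are quantized with a per-parameter bit budget that grows with their Fisher estimate. The function names, the rank-based bit allocation, and the 2-8 bit range are illustrative choices, not the authors' implementation.

```python
import numpy as np

def empirical_fisher(per_sample_grads):
    """Diagonal empirical Fisher: mean of squared per-parameter gradients.
    In practice this can be read off a second-moment optimizer state."""
    fisher = np.zeros_like(per_sample_grads[0])
    for g in per_sample_grads:
        fisher += g ** 2
    return fisher / len(per_sample_grads)

def prune_by_fisher(weights, fisher, keep_ratio=0.5):
    """Zero out the parameters with the smallest Fisher estimates."""
    k = max(1, int(weights.size * keep_ratio))
    threshold = np.partition(fisher.ravel(), -k)[-k]
    mask = fisher >= threshold          # keep the k most "informative" weights
    return weights * mask, mask

def nonuniform_quantize(weights, fisher, mask, min_bits=2, max_bits=8):
    """Fixed-point quantization whose per-parameter bit width grows with
    the Fisher estimate, so important parameters keep more precision."""
    out = np.zeros_like(weights)
    w, f = weights[mask], fisher[mask]
    # normalized rank in [0, 1] -> bit width in [min_bits, max_bits]
    ranks = np.argsort(np.argsort(f)) / max(f.size - 1, 1)
    bits = np.round(min_bits + ranks * (max_bits - min_bits)).astype(int)
    scale = np.abs(w).max()
    levels = 2 ** (bits - 1) - 1        # symmetric signed fixed point
    out[mask] = np.round(w / scale * levels) / levels * scale
    return out
```

The abstract's "stochastic optimization method that tracks second-order information" suggests these statistics come for free during training rather than from a separate gradient pass; the sketch above only shows the shape of the idea.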

Original language: English (US)
Title of host publication: Proceedings - IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016
Publisher: IEEE Computer Society
Pages: 93-98
Number of pages: 6
ISBN (Electronic): 9781467390385
DOIs
State: Published - Sep 2 2016
Event: 15th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016 - Pittsburgh, United States
Duration: Jul 11 2016 - Jul 13 2016

Publication series

Name: Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
Volume: 2016-September
ISSN (Print): 2159-3469
ISSN (Electronic): 2159-3477

Other

Other: 15th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016
Country/Territory: United States
City: Pittsburgh
Period: 7/11/16 - 7/13/16

ASJC Scopus subject areas

  • Hardware and Architecture
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Fingerprint

Research topics of 'Reducing the model order of deep neural networks using information theory'.
