5 Citations (Scopus)

Abstract

Construction of robust and accurate deep neural networks (DNNs) is a computationally demanding and time-consuming process, and the resulting networks are memory intensive. Today, there is an ever-increasing need to provide proactive and personalized support for users of smart devices. We could provide better personalization if we had the ability to update/train the DNN on edge devices. Also, by moving some computation to the edge of the cloud infrastructure, we could reduce the load on computing clusters, freeing resources for core and complex tasks. To this end, weight pruning for DNNs has been proposed to reduce their storage footprint by an order of magnitude. However, it remains unclear how to update/re-train DNNs once they are deployed on mobile devices. In this paper, we introduce the concept of re-training pruned networks, which should aid personalization of smart devices as well as increase their fault tolerance. We assume that the data used to re-train the pruned network comes from a distribution similar to that used to train the original, unpruned network. We propose various strategies for pruning and re-training the networks and show that we can obtain a significant improvement on the new data while minimizing the reduction in performance on the original data.
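
The abstract describes magnitude-style weight pruning followed by re-training on new data. As a rough illustration only (the paper's actual pruning and re-training strategies are not reproduced here), the following PyTorch sketch prunes the smallest-magnitude weights of a toy model, then fine-tunes it while masking gradients so the pruned weights stay at zero; the model, sparsity level, and helper names are illustrative assumptions.

```python
# Illustrative sketch of magnitude-based pruning + masked re-training.
# Not the authors' implementation; model and hyperparameters are made up.
import torch
import torch.nn as nn

def prune_by_magnitude(model: nn.Module, sparsity: float) -> dict:
    """Zero the smallest-magnitude weights of each weight matrix and
    return binary masks so re-training can preserve the sparsity."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases
            continue
        k = int(sparsity * param.numel())
        if k == 0:
            continue
        threshold = param.abs().flatten().kthvalue(k).values
        mask = (param.abs() > threshold).float()
        param.data.mul_(mask)  # apply the prune in place
        masks[name] = mask
    return masks

def retrain_step(model, masks, batch, targets, optimizer, loss_fn):
    """One re-training step on new (e.g. on-device) data; gradients of
    pruned weights are zeroed so the pruned connections stay pruned."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch), targets)
    loss.backward()
    for name, param in model.named_parameters():
        if name in masks:
            param.grad.mul_(masks[name])
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    masks = prune_by_magnitude(model, sparsity=0.9)  # keep ~10% of weights
    # Plain SGD (no momentum/weight decay) leaves masked weights at zero.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    print("loss:", retrain_step(model, masks, x, y, opt, nn.CrossEntropyLoss()))
```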

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE 1st International Conference on Edge Computing, EDGE 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 244-247
Number of pages: 4
ISBN (Electronic): 9781538620175
DOIs: 10.1109/IEEE.EDGE.2017.45
State: Published - Sep 7, 2017
Event: 1st IEEE International Conference on Edge Computing, EDGE 2017 - Honolulu, United States
Duration: Jun 25, 2017 - Jun 30, 2017

Other

Other: 1st IEEE International Conference on Edge Computing, EDGE 2017
Country: United States
City: Honolulu
Period: 6/25/17 - 6/30/17

Fingerprint

  • Neural networks
  • Cluster computing
  • Fault tolerance
  • Mobile devices
  • Data storage equipment
  • Deep neural networks

Keywords

  • Deep neural network
  • Edge computing
  • Personalization
  • Weight pruning

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture

Cite this

Chandakkar, P. S., Li, Y., Ding, P. L. K., & Li, B. (2017). Strategies for Re-Training a Pruned Neural Network in an Edge Computing Paradigm. In Proceedings - 2017 IEEE 1st International Conference on Edge Computing, EDGE 2017 (pp. 244-247). [8029286] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IEEE.EDGE.2017.45

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
