TY - GEN
T1 - Strategies for Re-Training a Pruned Neural Network in an Edge Computing Paradigm
AU - Chandakkar, Parag S.
AU - Li, Yikang
AU - Ding, Pak Lun Kevin
AU - Li, Baoxin
N1 - Funding Information:
Acknowledgment: This work was supported in part by a grant from ONR. Any opinions and views expressed in this material are those of the author(s) and do not necessarily reflect the views of ONR.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/9/7
Y1 - 2017/9/7
N2 - Constructing robust and accurate deep neural networks (DNNs) is a computationally demanding and time-consuming process, and the resulting networks are also memory intensive. Today, there is an ever-increasing need to provide proactive and personalized support for users of smart devices. Better personalization would be possible if we had the ability to update/train the DNN on edge devices. Moreover, by moving some computation to the edge of the cloud infrastructure, we could reduce the load on computing clusters, freeing resources for core and complex tasks. To this end, weight pruning for DNNs has been proposed to reduce their storage footprint by an order of magnitude. However, it remains unclear how to update/re-train DNNs once they are deployed on mobile devices. In this paper, we introduce the concept of re-training pruned networks, which should aid personalization of smart devices as well as increase their fault tolerance. We assume that the data used to re-train the pruned network comes from a distribution similar to that used to train the original, unpruned network. We propose various strategies for pruning and re-training the networks and show that a significant improvement on the new data can be obtained while minimizing the reduction in performance on the original data.
AB - Constructing robust and accurate deep neural networks (DNNs) is a computationally demanding and time-consuming process, and the resulting networks are also memory intensive. Today, there is an ever-increasing need to provide proactive and personalized support for users of smart devices. Better personalization would be possible if we had the ability to update/train the DNN on edge devices. Moreover, by moving some computation to the edge of the cloud infrastructure, we could reduce the load on computing clusters, freeing resources for core and complex tasks. To this end, weight pruning for DNNs has been proposed to reduce their storage footprint by an order of magnitude. However, it remains unclear how to update/re-train DNNs once they are deployed on mobile devices. In this paper, we introduce the concept of re-training pruned networks, which should aid personalization of smart devices as well as increase their fault tolerance. We assume that the data used to re-train the pruned network comes from a distribution similar to that used to train the original, unpruned network. We propose various strategies for pruning and re-training the networks and show that a significant improvement on the new data can be obtained while minimizing the reduction in performance on the original data.
KW - Deep neural network
KW - Edge computing
KW - Personalization
KW - Weight pruning
UR - http://www.scopus.com/inward/record.url?scp=85032290937&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85032290937&partnerID=8YFLogxK
U2 - 10.1109/IEEE.EDGE.2017.45
DO - 10.1109/IEEE.EDGE.2017.45
M3 - Conference contribution
AN - SCOPUS:85032290937
T3 - Proceedings - 2017 IEEE 1st International Conference on Edge Computing, EDGE 2017
SP - 244
EP - 247
BT - Proceedings - 2017 IEEE 1st International Conference on Edge Computing, EDGE 2017
A2 - Goscinski, Andrzej M
A2 - Luo, Min
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 1st IEEE International Conference on Edge Computing, EDGE 2017
Y2 - 25 June 2017 through 30 June 2017
ER -