Constructing robust and accurate deep neural networks (DNNs) is a computationally demanding and time-consuming process, and the resulting networks are memory intensive. Today, there is an ever-increasing need to provide proactive, personalized support for users of smart devices. We could provide better personalization if we could update/train the DNN directly on edge devices. Moreover, by moving some computation to the edge of the cloud infrastructure, we could reduce the load on computing clusters, freeing resources for core and complex tasks. To this end, weight pruning for DNNs has been proposed to reduce their storage footprint by an order of magnitude. However, it remains unclear how to update/re-train DNNs once they have been deployed on mobile devices. In this paper, we introduce the concept of re-training pruned networks, which should aid personalization of smart devices and increase their fault tolerance. We assume that the data used to re-train the pruned network comes from a distribution similar to that used to train the original, unpruned network. We propose several strategies for pruning and re-training networks and show that we can obtain a significant improvement on the new data while minimizing the reduction in performance on the original data.
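To make the two ingredients concrete, here is a minimal NumPy sketch of magnitude-based weight pruning followed by a masked gradient update, in which only the surviving weights are re-trained. The function names and the masking scheme are illustrative assumptions, not the specific strategies evaluated in this paper:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Returns the pruned weight array and a binary mask marking
    which weights survive (1) versus which are pruned (0).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        mask = np.ones_like(weights)
    else:
        # k-th smallest magnitude acts as the pruning threshold
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

def masked_update(weights, grad, mask, lr=0.1):
    """One re-training step: pruned weights stay zero.

    Applying the mask after the gradient step keeps the sparsity
    pattern fixed while the surviving weights adapt to new data.
    """
    return (weights - lr * grad) * mask

# Example: prune half the weights, then take one re-training step.
w = np.array([[0.5, -0.05], [0.01, 1.2]])
pruned_w, mask = magnitude_prune(w, sparsity=0.5)
updated_w = masked_update(pruned_w, np.ones_like(w), mask, lr=0.1)
```

In a real DNN the same idea applies per layer: the mask is computed once at pruning time and re-applied after every optimizer step, so re-training on new data never revives pruned connections.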