Are existing knowledge transfer techniques effective for deep learning on edge devices?

Ragini Sharma, Saman Biookaghazadeh, Ming Zhao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality need to perform machine learning (ML) and artificial intelligence (AI) tasks on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced for deployment on edge devices, but they may lose their capability and not perform well. Recent works used knowledge transfer (KT) techniques to transfer information from a large network (termed the teacher) to a small one (termed the student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is lacking. This paper provides an extensive study of the performance (in both accuracy and convergence speed) of knowledge transfer, considering different student architectures and different techniques for transferring knowledge from teacher to student. The results show that the performance of KT does vary by architecture and transfer technique. A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student. But other architectures and transfer techniques do not fare so well, and some of them even lead to a negative performance impact.
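The knowledge transfer described in the abstract is commonly realized as knowledge distillation, where the student is trained against the teacher's temperature-softened output distribution in addition to the hard labels. The following is a minimal NumPy sketch of such a distillation loss, not the paper's exact method; the temperature `T` and mixing weight `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of (a) the KL divergence between the teacher's and the
    student's temperature-softened outputs and (b) ordinary cross-entropy
    on the ground-truth labels."""
    p_t = softmax(teacher_logits, T)   # soft targets from the teacher
    p_s = softmax(student_logits, T)   # student's softened predictions
    # KL(p_t || p_s); the conventional T^2 factor keeps gradient magnitudes
    # comparable to the hard-label term.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    hard = softmax(student_logits)     # T=1 softmax for the hard-label loss
    ce = -np.log(hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

Transfer from intermediate layers, as in the architecture the paper finds most effective, would add analogous regression terms between matched teacher and student feature maps.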

Original language: English (US)
Title of host publication: HPDC 2018 - Proceedings of The 27th International Symposium on High-Performance Parallel and Distributed Computing Posters/Doctoral Consortium
Publisher: Association for Computing Machinery, Inc
Pages: 15-16
Number of pages: 2
ISBN (Electronic): 9781450358996
DOIs
State: Published - Jun 11 2018
Event: 27th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2018 - Tempe, United States
Duration: Jun 11 2018 → …

Other

Other: 27th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2018
Country/Territory: United States
City: Tempe
Period: 6/11/18 → …

Keywords

  • Deep learning
  • Edge computing
  • Knowledge transfer
  • Neural networks

ASJC Scopus subject areas

  • Computer Science Applications
  • Software
  • Computational Theory and Mathematics
