Abstract
With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality need to perform machine learning (ML) and artificial intelligence (AI) tasks on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced for deployment on edge devices, but the reduced models may lose capability and fail to perform well. Recent works have used knowledge transfer (KT) techniques to transfer information from a large network (termed the teacher) to a small one (termed the student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is lacking. This paper provides an extensive study of the performance (in both accuracy and convergence speed) of knowledge transfer, considering different student architectures and different techniques for transferring knowledge from teacher to student. The results show that the performance of KT does vary by architecture and transfer technique. A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student, but other architectures and transfer techniques do not fare as well, and some of them even lead to a negative performance impact.
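The abstract does not include the paper's implementation. As an illustration of the kind of knowledge transfer it studies, the sketch below combines a soft-target loss on the teacher's last layer with a hint loss on an intermediate layer, in the spirit of standard distillation and FitNets-style hints; it is a minimal sketch, not the authors' method. The function name `transfer_loss`, the temperature `T`, the weights `alpha` and `beta`, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def transfer_loss(student_logits, teacher_logits,
                  student_hidden, teacher_hidden,
                  labels, T=4.0, alpha=0.5, beta=0.1):
    """Combine hard-label, soft-target, and intermediate-layer (hint) losses.

    Hyperparameters T, alpha, beta are illustrative, not the paper's settings.
    """
    # Standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-target loss: match the teacher's softened output distribution
    # (knowledge from the teacher's last layer).
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hint loss: match an intermediate representation of the teacher
    # (assumes both hidden tensors have already been projected to the same shape).
    hint = F.mse_loss(student_hidden, teacher_hidden)
    return hard + alpha * soft + beta * hint

if __name__ == "__main__":
    # Toy example with random tensors; shapes are arbitrary assumptions.
    logits_s, logits_t = torch.randn(32, 10), torch.randn(32, 10)
    hidden_s, hidden_t = torch.randn(32, 64), torch.randn(32, 64)
    labels = torch.randint(0, 10, (32,))
    print(transfer_loss(logits_s, logits_t, hidden_s, hidden_t, labels))
```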
| Original language | English (US) |
|---|---|
| Title of host publication | HPDC 2018 - Proceedings of The 27th International Symposium on High-Performance Parallel and Distributed Computing Posters/Doctoral Consortium |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 15-16 |
| Number of pages | 2 |
| ISBN (Electronic) | 9781450358996 |
| DOIs | |
| State | Published - Jun 11 2018 |
| Event | 27th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2018 - Tempe, United States. Duration: Jun 11 2018 → … |
Other

| Other | 27th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2018 |
|---|---|
| Country/Territory | United States |
| City | Tempe |
| Period | 6/11/18 → … |
Keywords
- Deep learning
- Edge computing
- Knowledge transfer
- Neural networks
ASJC Scopus subject areas
- Computer Science Applications
- Software
- Computational Theory and Mathematics