TY - GEN
T1 - A hybrid approach to offloading mobile image classification
AU - Hauswald, J.
AU - Manville, T.
AU - Zheng, Q.
AU - Dreslinski, R.
AU - Chakrabarti, C.
AU - Mudge, T.
PY - 2014/1/1
Y1 - 2014/1/1
N2 - Current mobile devices are unable to execute complex vision applications in a timely and power-efficient manner without offloading some of the computation. This paper examines the tradeoffs that arise from executing some of the workload onboard and some remotely. Feature extraction and matching play an essential role in image classification and have the potential to be executed locally. Along with advances in mobile hardware, understanding the computation requirements of these applications is essential to realizing their full potential in mobile environments. We analyze the ability of a mobile platform to execute feature extraction, matching, and prediction workloads under various scenarios. The configuration with the best runtime (11% faster) executes feature extraction on an onboard GPU and offloads the rest of the pipeline. Alternatively, compressing the image and sending it over the network achieves the lowest data transferred (2.5× better) and the lowest energy usage (3.7× better) compared with the next best option.
KW - energy management
KW - image classification
KW - mobile computing
KW - offloading
UR - http://www.scopus.com/inward/record.url?scp=84905227563&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84905227563&partnerID=8YFLogxK
DO - 10.1109/ICASSP.2014.6855235
M3 - Conference contribution
AN - SCOPUS:84905227563
SN - 9781479928927
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 8375
EP - 8379
BT - 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
Y2 - 4 May 2014 through 9 May 2014
ER -