TY - GEN
T1 - Inexact-ADMM based federated meta-learning for fast and continual edge learning
AU - Yue, Sheng
AU - Ren, Ju
AU - Xin, Jiang
AU - Lin, Sen
AU - Zhang, Junshan
N1 - Funding Information:
This research was supported in part by NSF under Grants CNS-2003081 and CPS-1739344, National Key R&D Program of China under Grant No. 2019YFA0706403, National Natural Science Foundation of China under Grants No. 62072472, 61702562 and U19A2067, Natural Science Foundation of Hunan Province, China under Grant No. 2020JJ2050, 111 Project under Grant No. B18059, the Young Elite Scientists Sponsorship Program by CAST under Grant No. 2018QNRC001, the Young Talents Plan of Hunan Province of China under Grant No. 2019RS2001, and also financially supported by China Scholarship Council (CSC).
Publisher Copyright:
© 2021 ACM.
PY - 2021/7/26
Y1 - 2021/7/26
N2 - In order to meet the requirements for performance, safety, and latency in many IoT applications, intelligent decisions must be made right here and right now at the network edge. However, constrained resources and limited local data pose significant challenges to the development of edge AI. To overcome these challenges, we explore continual edge learning capable of leveraging knowledge transfer from previous tasks. Aiming to achieve fast and continual edge learning, we propose a platform-aided federated meta-learning architecture where edge nodes collaboratively learn a meta-model, aided by knowledge transfer from prior tasks. The edge learning problem is cast as a regularized optimization problem, where the valuable knowledge learned from previous tasks is extracted as regularization. Then, we devise an ADMM-based federated meta-learning algorithm, namely ADMM-FedMeta, where ADMM offers a natural mechanism to decompose the original problem into many subproblems that can be solved in parallel across edge nodes and the platform. Further, a variant of the inexact-ADMM method is employed, where the subproblems are 'solved' via linear approximation and Hessian estimation to reduce the computational cost per round to O(n). We provide a comprehensive analysis of ADMM-FedMeta in terms of convergence properties, rapid adaptation performance, and the forgetting effect of prior knowledge transfer for the general non-convex case. Extensive experimental studies demonstrate the effectiveness and efficiency of ADMM-FedMeta, and show that it substantially outperforms existing baselines.
KW - ADMM
KW - Continual learning
KW - Edge intelligence
KW - Federated meta-learning
KW - Regularization
UR - http://www.scopus.com/inward/record.url?scp=85117147665&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85117147665&partnerID=8YFLogxK
U2 - 10.1145/3466772.3467038
DO - 10.1145/3466772.3467038
M3 - Conference contribution
AN - SCOPUS:85117147665
T3 - Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)
SP - 91
EP - 100
BT - MobiHoc 2021 - Proceedings of the 2021 22nd International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
PB - Association for Computing Machinery
T2 - 22nd International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, MobiHoc 2021
Y2 - 26 July 2021 through 29 July 2021
ER -