TY - GEN
T1 - Accelerating distributed online meta-learning via multi-agent collaboration under limited communication
AU - Lin, Sen
AU - Dedeoglu, Mehmet
AU - Zhang, Junshan
N1 - Funding Information:
This work is supported in part by NSF Grants CNS-2003081, CPS-1739344 and SaTC-1618768.
Publisher Copyright:
© 2021 ACM.
PY - 2021/7/26
Y1 - 2021/7/26
N2 - Online meta-learning is emerging as an enabling technique for achieving edge intelligence in the IoT ecosystem. Nevertheless, to learn a good meta-model for within-task fast adaptation, a single agent alone has to learn over many tasks, and this is the so-called 'cold-start' problem. Observing that in a multi-agent network the learning tasks across different agents often share some model similarity, we ask the following fundamental question: "Is it possible to accelerate the online meta-learning across agents via limited communication, and if yes, how much benefit can be achieved?" To answer this question, we propose a multi-agent online meta-learning framework and cast it as an equivalent two-level nested online convex optimization (OCO) problem. By characterizing the upper bound of the agent-task-averaged regret, we show that the performance of multi-agent online meta-learning depends heavily on how much an agent can benefit from the distributed network-level OCO for meta-model updates via limited communication, which however is not well understood. To tackle this challenge, we devise a distributed online gradient descent algorithm with gradient tracking, where each agent tracks the global gradient using only one communication step with its neighbors per iteration; this achieves an average regret of O(√(T/N)) per agent, i.e., a factor-of-√(1/N) speedup over the optimal single-agent regret O(√T) after T iterations, where N is the number of agents. Building on this sharp performance speedup, we next develop a multi-agent online meta-learning algorithm and show that it can achieve the optimal task-average regret at a faster rate of O(1/√(NT)) via limited communication, compared to single-agent online meta-learning. Extensive experiments corroborate the theoretical results.
AB - Online meta-learning is emerging as an enabling technique for achieving edge intelligence in the IoT ecosystem. Nevertheless, to learn a good meta-model for within-task fast adaptation, a single agent alone has to learn over many tasks, and this is the so-called 'cold-start' problem. Observing that in a multi-agent network the learning tasks across different agents often share some model similarity, we ask the following fundamental question: "Is it possible to accelerate the online meta-learning across agents via limited communication, and if yes, how much benefit can be achieved?" To answer this question, we propose a multi-agent online meta-learning framework and cast it as an equivalent two-level nested online convex optimization (OCO) problem. By characterizing the upper bound of the agent-task-averaged regret, we show that the performance of multi-agent online meta-learning depends heavily on how much an agent can benefit from the distributed network-level OCO for meta-model updates via limited communication, which however is not well understood. To tackle this challenge, we devise a distributed online gradient descent algorithm with gradient tracking, where each agent tracks the global gradient using only one communication step with its neighbors per iteration; this achieves an average regret of O(√(T/N)) per agent, i.e., a factor-of-√(1/N) speedup over the optimal single-agent regret O(√T) after T iterations, where N is the number of agents. Building on this sharp performance speedup, we next develop a multi-agent online meta-learning algorithm and show that it can achieve the optimal task-average regret at a faster rate of O(1/√(NT)) via limited communication, compared to single-agent online meta-learning. Extensive experiments corroborate the theoretical results.
KW - Distributed online convex optimization
KW - Gradient tracking
KW - Multi-agent network
KW - Online meta-learning
UR - http://www.scopus.com/inward/record.url?scp=85121663656&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85121663656&partnerID=8YFLogxK
U2 - 10.1145/3466772.3467055
DO - 10.1145/3466772.3467055
M3 - Conference contribution
AN - SCOPUS:85121663656
T3 - Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)
SP - 261
EP - 270
BT - MobiHoc 2021 - Proceedings of the 2021 22nd International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
PB - Association for Computing Machinery
T2 - 22nd International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, MobiHoc 2021
Y2 - 26 July 2021 through 29 July 2021
ER -