TY - GEN
T1 - A Push-Pull Gradient Method for Distributed Optimization in Networks
AU - Pu, Shi
AU - Shi, Wei
AU - Xu, Jinming
AU - Nedich, Angelia
N1 - Funding Information:
This work was supported in part by NSF grant CCF-1717391 and by ONR grant no. N00014-12-1-0998.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
AB - In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method in which each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, the information about the gradients is pushed to the neighbors, while the information about the decision variable is pulled from the neighbors (hence the name 'push-pull gradient method'). The method unifies algorithms over different types of distributed architectures, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a static directed network. In our numerical tests, the algorithm performs well even over time-varying directed networks.
UR - http://www.scopus.com/inward/record.url?scp=85062168090&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062168090&partnerID=8YFLogxK
U2 - 10.1109/CDC.2018.8619047
DO - 10.1109/CDC.2018.8619047
M3 - Conference contribution
AN - SCOPUS:85062168090
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 3385
EP - 3390
BT - 2018 IEEE Conference on Decision and Control, CDC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 57th IEEE Conference on Decision and Control, CDC 2018
Y2 - 17 December 2018 through 19 December 2018
ER -
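
For illustration, a minimal Python sketch of the two-update scheme the abstract describes, assuming quadratic local costs f_i(x) = 0.5*a_i*(x - b_i)^2 and a directed ring topology with self-loops; the mixing weights, step size, and problem data below are illustrative assumptions, not values from the paper. R is row-stochastic (each agent pulls decision-variable estimates from its in-neighbors) and C is column-stochastic (each agent pushes its gradient-tracking estimate to its out-neighbors).

import numpy as np

n = 5                                  # number of agents (assumed)
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, n)           # local costs f_i(x) = 0.5*a_i*(x - b_i)^2 (assumed)
b = rng.uniform(-1.0, 1.0, n)
x_star = np.dot(a, b) / a.sum()        # minimizer of the summed cost

def grad(x):
    # stacked local gradients, each agent evaluating at its own iterate
    return a * (x - b)

# Directed ring with self-loops: agent i-1 sends to agent i.
P = np.roll(np.eye(n), 1, axis=0)      # P[i, i-1] = 1
R = 0.7 * np.eye(n) + 0.3 * P          # row-stochastic: pull decision variables
C = 0.6 * np.eye(n) + 0.4 * P          # column-stochastic: push gradient trackers

alpha = 0.05                           # constant step size (assumed)
x = np.zeros(n)                        # estimates of the optimal decision variable
y = grad(x)                            # estimates of the average gradient, y_0 = grad F(x_0)
g_old = y.copy()

for _ in range(300):
    x = R @ (x - alpha * y)            # pull step: mix neighbors' decision variables
    g_new = grad(x)
    y = C @ y + g_new - g_old          # push step: track the average gradient
    g_old = g_new

print("max |x_i - x*| =", np.max(np.abs(x - x_star)))

Because 1^T C = 1^T, the trackers satisfy 1^T y_k = 1^T grad F(x_k) at every iteration; this gradient-tracking property is what drives all agents to the common minimizer at a linear rate for strongly convex, smooth costs, as the paper shows.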