A Push-Pull Gradient Method for Distributed Optimization in Networks

Shi Pu, Wei Shi, Jinming Xu, Angelia Nedich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we focus on solving a distributed convex optimization problem over a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. To this end, we consider a new distributed gradient-based method in which each node maintains two estimates: an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, information about the decision variable is pushed to its neighbors, while information about the gradients is pulled from its neighbors (hence the name 'push-pull gradient method'). The method unifies algorithms across different distributed architectures, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower). We show that the algorithm converges linearly for strongly convex and smooth objective functions over a static directed network. In our numerical tests, the algorithm performs well even on time-varying directed networks.
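The update described in the abstract can be sketched numerically. The sketch below runs a push-pull-style iteration, x_{k+1} = R(x_k - α y_k) and y_{k+1} = C y_k + ∇F(x_{k+1}) - ∇F(x_k), on scalar quadratic costs, with a row-stochastic R mixing decision variables (the "pull") and a column-stochastic C tracking the average gradient (the "push"). The directed-ring topology, the 0.5 mixing weights, and the step size are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Each agent i holds a quadratic cost f_i(x) = 0.5 * (x - b_i)^2,
# so the minimizer of the sum is the mean of the b_i.
n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)

def grad(i, x):
    return x - b[i]

# Directed ring: R is row-stochastic (agents pull decision estimates
# from in-neighbors), C is column-stochastic (agents push gradient
# information to out-neighbors).
R = np.zeros((n, n))
C = np.zeros((n, n))
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5
    C[i, i] = C[(i + 1) % n, i] = 0.5

alpha = 0.1                                      # step size (illustrative)
x = np.zeros(n)                                  # decision-variable estimates
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

for _ in range(2000):
    x_new = R @ (x - alpha * y)  # pull step on the decision variables
    # Gradient-tracking step: push tracker info, then add local change.
    y = C @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new
```

Because C is column-stochastic, the sum of the trackers y stays equal to the sum of the current local gradients at every iteration, which is what lets each agent follow the direction of the average gradient; the estimates x then contract linearly toward the common minimizer.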

Original language: English (US)
Title of host publication: 2018 IEEE Conference on Decision and Control, CDC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3385-3390
Number of pages: 6
ISBN (Electronic): 9781538613955
DOI: 10.1109/CDC.2018.8619047
State: Published - Jan 18 2019
Event: 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States
Duration: Dec 17 2018 - Dec 19 2018

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2018-December
ISSN (Print): 0743-1546



ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Cite this

Pu, S., Shi, W., Xu, J., & Nedich, A. (2019). A Push-Pull Gradient Method for Distributed Optimization in Networks. In 2018 IEEE Conference on Decision and Control, CDC 2018 (pp. 3385-3390). [8619047] (Proceedings of the IEEE Conference on Decision and Control; Vol. 2018-December). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2018.8619047

