### Abstract

In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method in which each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, the information about the decision variable is pushed to the neighbors, while the information about the gradients is pulled from the neighbors (hence the name 'push-pull gradient method'). The method unifies algorithms with different types of distributed architectures, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a static directed network. In our numerical tests, the algorithm performs well even for time-varying directed networks.
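As a concrete illustration of the two-estimate scheme the abstract describes, the sketch below runs a push-pull-style update on a toy problem with quadratic local costs: each agent mixes neighbors' decision-variable estimates with one matrix and neighbors' gradient-tracking estimates with another. The directed ring network, mixing matrices `R` and `C`, step size `alpha`, and local targets `b` are all illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Toy problem: n agents, each with quadratic cost f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer of the sum is mean(b). (Illustrative data, not from
# the paper.)
n = 4
b = np.array([1.0, 2.0, 3.0, 6.0])           # local targets; optimum is b.mean() = 3.0

def grad(x):
    """Stacked local gradients: grad f_i(x_i) = x_i - b_i."""
    return x - b

# Directed ring with self-loops: agent i also hears from agent (i-1) mod n.
A = np.eye(n) + np.roll(np.eye(n), 1, axis=0)
R = A / A.sum(axis=1, keepdims=True)          # row-stochastic mixing matrix
C = A / A.sum(axis=0, keepdims=True)          # column-stochastic mixing matrix

alpha = 0.1                                   # step size (illustrative choice)
x = np.zeros(n)                               # decision-variable estimates
y = grad(x)                                   # gradient-tracking estimates

for _ in range(500):
    x_new = R @ (x - alpha * y)               # mix decision-variable estimates
    y = C @ y + grad(x_new) - grad(x)         # track the average gradient
    x = x_new

print(x)  # all entries close to the optimum 3.0
```

Because `C` is column-stochastic, the sum of the tracking estimates `y` equals the sum of the current local gradients at every iteration, which is what lets each agent follow the gradient of the *average* objective while only communicating with its neighbors.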

Original language | English (US) |
---|---|
Title of host publication | 2018 IEEE Conference on Decision and Control, CDC 2018 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 3385-3390 |
Number of pages | 6 |
ISBN (Electronic) | 9781538613955 |
DOIs | https://doi.org/10.1109/CDC.2018.8619047 |
State | Published - Jan 18 2019 |
Event | 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States. Duration: Dec 17 2018 → Dec 19 2018 |

### Publication series

Name | Proceedings of the IEEE Conference on Decision and Control |
---|---|
Volume | 2018-December |
ISSN (Print) | 0743-1546 |

### Conference

Conference | 57th IEEE Conference on Decision and Control, CDC 2018 |
---|---|
Country | United States |
City | Miami |
Period | 12/17/18 → 12/19/18 |


### ASJC Scopus subject areas

- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization

### Cite this

Pu, S., Shi, W., Xu, J., & Nedich, A. (2019). A Push-Pull Gradient Method for Distributed Optimization in Networks. In *2018 IEEE Conference on Decision and Control, CDC 2018* (pp. 3385-3390). [8619047] (Proceedings of the IEEE Conference on Decision and Control; Vol. 2018-December). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2018.8619047

**A Push-Pull Gradient Method for Distributed Optimization in Networks.** / Pu, Shi; Shi, Wei; Xu, Jinming; Nedich, Angelia.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Pu, S, Shi, W, Xu, J & Nedich, A 2019, A Push-Pull Gradient Method for Distributed Optimization in Networks. in *2018 IEEE Conference on Decision and Control, CDC 2018*, 8619047, Proceedings of the IEEE Conference on Decision and Control, vol. 2018-December, Institute of Electrical and Electronics Engineers Inc., pp. 3385-3390, 57th IEEE Conference on Decision and Control, CDC 2018, Miami, United States, 12/17/18. https://doi.org/10.1109/CDC.2018.8619047

TY - GEN

T1 - A Push-Pull Gradient Method for Distributed Optimization in Networks

AU - Pu, Shi

AU - Shi, Wei

AU - Xu, Jinming

AU - Nedich, Angelia

PY - 2019/1/18

Y1 - 2019/1/18

N2 - In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method where each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient for the average of the agents' objective functions. From the viewpoint of an agent, the information about the decision variable is pushed to the neighbors, while the information about the gradients is pulled from the neighbors (hence giving the name 'push-pull gradient method'). The method unifies the algorithms with different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architecture. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a directed static network. In our numerical test, the algorithm performs well even for time-varying directed networks.

AB - In this paper, we focus on solving a distributed convex optimization problem in a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. In order to minimize the sum of the cost functions, we consider a new distributed gradient-based method where each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient for the average of the agents' objective functions. From the viewpoint of an agent, the information about the decision variable is pushed to the neighbors, while the information about the gradients is pulled from the neighbors (hence giving the name 'push-pull gradient method'). The method unifies the algorithms with different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architecture. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a directed static network. In our numerical test, the algorithm performs well even for time-varying directed networks.

UR - http://www.scopus.com/inward/record.url?scp=85062168090&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85062168090&partnerID=8YFLogxK

U2 - 10.1109/CDC.2018.8619047

DO - 10.1109/CDC.2018.8619047

M3 - Conference contribution

T3 - Proceedings of the IEEE Conference on Decision and Control

SP - 3385

EP - 3390

BT - 2018 IEEE Conference on Decision and Control, CDC 2018

PB - Institute of Electrical and Electronics Engineers Inc.

ER -