### Abstract

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method. We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant step size choice). More importantly, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, a performance comparable to that of a centralized stochastic gradient algorithm. Numerical examples further demonstrate the effectiveness of the method.
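The method described above can be illustrated with a minimal simulation. The sketch below assumes the standard gradient-tracking update form, in which each agent mixes its iterate with neighbors and maintains a tracker of the average stochastic gradient; the toy quadratic problem, the ring network, and all variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of a distributed stochastic gradient tracking iteration
# (assumed update form; problem data below are hypothetical):
#   x_{k+1} = W (x_k - alpha * y_k)          # consensus + gradient step
#   y_{k+1} = W y_k + g(x_{k+1}) - g(x_k)    # track the average stochastic gradient

rng = np.random.default_rng(0)
n, d, alpha, sigma = 5, 3, 0.05, 0.1   # agents, dimension, step size, noise level

# Local smooth, strongly convex costs f_i(x) = 0.5 (x - b_i)^T A_i (x - b_i)
A = np.stack([np.eye(d) * (1.0 + i) for i in range(n)])
b = rng.normal(size=(n, d))

def stoch_grad(x):
    """Unbiased noisy gradients of each agent's local cost."""
    g = np.einsum('ijk,ik->ij', A, x - b)
    return g + sigma * rng.normal(size=(n, d))

# Doubly stochastic mixing matrix for a ring network
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = rng.normal(size=(n, d))
y = stoch_grad(x)            # trackers initialized to local stochastic gradients
g_old = y.copy()

for _ in range(2000):
    x = W @ (x - alpha * y)
    g_new = stoch_grad(x)
    y = W @ y + g_new - g_old
    g_old = g_new

# Minimizer of the average cost: solve (sum_i A_i) x* = sum_i A_i b_i
x_star = np.linalg.solve(A.sum(0), np.einsum('ijk,ik->ij', A, b).sum(0))
err = np.linalg.norm(x.mean(0) - x_star)
print(f"distance of average iterate to optimum: {err:.3f}")
```

With a constant step size the iterates settle in a noise-dependent neighborhood of the optimum rather than converging exactly, consistent with the abstract's statement.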

| Original language | English (US) |
|---|---|
| Title of host publication | 2018 IEEE Conference on Decision and Control, CDC 2018 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 963-968 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781538613955 |
| DOIs | https://doi.org/10.1109/CDC.2018.8618708 |
| State | Published - Jan 18 2019 |
| Event | 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States. Duration: Dec 17 2018 → Dec 19 2018 |

### Publication series

| Name | Proceedings of the IEEE Conference on Decision and Control |
|---|---|
| Volume | 2018-December |
| ISSN (Print) | 0743-1546 |

### Conference

| Conference | 57th IEEE Conference on Decision and Control, CDC 2018 |
|---|---|
| Country | United States |
| City | Miami |
| Period | 12/17/18 → 12/19/18 |

### ASJC Scopus subject areas

- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization

### Cite this

Pu, S., & Nedich, A. (2019). A Distributed Stochastic Gradient Tracking Method. In *2018 IEEE Conference on Decision and Control, CDC 2018* (pp. 963-968). [8618708] (Proceedings of the IEEE Conference on Decision and Control; Vol. 2018-December). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2018.8618708

**A Distributed Stochastic Gradient Tracking Method.** / Pu, Shi; Nedich, Angelia.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

