### Abstract

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size n, a performance comparable to that of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs a lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
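The abstract does not spell out the update rule, but the sketch below illustrates a distributed stochastic gradient tracking iteration of the kind DSGT describes: each agent takes a gradient step, averages with its neighbors through a mixing matrix, and maintains an auxiliary variable that tracks the network-average stochastic gradient. The mixing matrix `W`, the stepsize `alpha`, the quadratic local costs, and all function names are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def dsgt(W, grad_est, x0, alpha, num_iters, rng):
    """Sketch of a distributed stochastic gradient tracking iteration.

    W        : (n, n) doubly stochastic mixing matrix of the network
    grad_est : callable (x, rng) -> (n, d) unbiased local gradient estimates
    x0       : (n, d) initial iterates, one row per agent
    alpha    : constant stepsize
    """
    x = x0.copy()
    y = grad_est(x, rng)          # tracker initialized at the first gradient estimates
    g_prev = y.copy()
    for _ in range(num_iters):
        x = W @ (x - alpha * y)   # local gradient step, then averaging with neighbors
        g = grad_est(x, rng)
        y = W @ y + g - g_prev    # track the network-average stochastic gradient
        g_prev = g
    return x

# Hypothetical example: n agents with f_i(x) = 0.5 * ||x - b_i||^2,
# so the global minimizer is the mean of the b_i.
rng = np.random.default_rng(0)
n, d = 10, 3
b = rng.normal(size=(n, d))
noisy_grad = lambda x, rng: (x - b) + 0.1 * rng.normal(size=x.shape)
W = np.full((n, n), 1.0 / n)      # fully connected network, chosen for brevity
x_final = dsgt(W, noisy_grad, np.zeros((n, d)), alpha=0.1, num_iters=2000, rng=rng)
print(np.allclose(x_final.mean(axis=0), b.mean(axis=0), atol=0.1))
```

In the paper's setting, `W` would typically be a sparse doubly stochastic matrix matching the communication graph; the fully connected choice above only keeps the example short.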

| Original language | English (US) |
| --- | --- |
| Journal | Mathematical Programming |
| DOIs | https://doi.org/10.1007/s10107-020-01487-0 |
| State | Accepted/In press - Jan 1 2020 |

### Keywords

- Communication networks
- Convex programming
- Distributed optimization
- Stochastic optimization

### ASJC Scopus subject areas

- Software
- Mathematics (all)


## Cite this

Distributed stochastic gradient tracking methods. *Mathematical Programming*. https://doi.org/10.1007/s10107-020-01487-0