Distributed stochastic gradient tracking methods

Research output: Contribution to journal › Article › peer-review

82 Scopus citations

Abstract

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size n, a performance comparable to that of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs a lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
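The core mechanism behind gradient tracking is that each agent i maintains, alongside its solution estimate x_i, an auxiliary variable y_i that tracks the network-wide average of the agents' stochastic gradients; both quantities are mixed with neighbors' values at every iteration. The sketch below illustrates this standard gradient-tracking recursion on a small synthetic quadratic problem; the mixing matrix, stepsize, noise level, and local cost functions are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a distributed stochastic gradient tracking step, assuming a
# doubly stochastic mixing matrix W and unbiased (noisy) gradient oracles.
# All problem data below is synthetic and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                      # number of agents, problem dimension
alpha = 0.05                     # constant stepsize (illustrative)
sigma = 0.1                      # std. dev. of the gradient noise (illustrative)

# Local smooth, strongly convex quadratics f_i(x) = 0.5 * (x - b_i)' A_i (x - b_i)
A = [np.eye(d) * (1.0 + i) for i in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

def stoch_grad(i, x):
    """Unbiased estimate of the gradient of f_i at x."""
    return A[i] @ (x - b[i]) + sigma * rng.standard_normal(d)

# Doubly stochastic mixing matrix for a 4-agent ring (illustrative weights)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Initialization: each tracker starts at the agent's first stochastic gradient
x = rng.standard_normal((n, d))
g = np.array([stoch_grad(i, x[i]) for i in range(n)])
y = g.copy()

for k in range(500):
    # x-update: mix neighbors' (estimate - stepsize * tracker) values
    x = W @ (x - alpha * y)
    # y-update: mix trackers, then add the fresh-minus-old gradient difference
    g_new = np.array([stoch_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g
    g = g_new

# Compare the average iterate with the minimizer of the average cost
x_star = np.linalg.solve(sum(A), sum(Ai @ bi for Ai, bi in zip(A, b)))
print("mean distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

With a constant stepsize, the iterates in this sketch settle into a noise-dominated neighborhood of the optimum rather than converging exactly, which mirrors the neighborhood-convergence behavior described in the abstract.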

Original language: English (US)
Pages (from-to): 409-457
Number of pages: 49
Journal: Mathematical Programming
Volume: 187
Issue number: 1-2
DOIs
State: Published - May 2021

Keywords

  • Communication networks
  • Convex programming
  • Distributed optimization
  • Stochastic optimization

ASJC Scopus subject areas

  • Software
  • General Mathematics
