Achieving geometric convergence for distributed optimization over time-varying graphs

Angelia Nedich, Alex Olshevsky, and Wei Shi

Research output: Contribution to journal › Article

113 Scopus citations

Abstract

This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed stepsizes and, yet, drives all the agents’ iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed stepsizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the stepsizes do not exceed some upper bounds. We establish explicit estimates for the convergence rates. When the graphs are undirected, these estimates show that the convergence rate of DIGing scales polynomially with the number of agents. We also provide some numerical experiments to demonstrate the efficacy of the proposed algorithms and to validate our theoretical findings.
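To make the DIGing updates described in the abstract concrete, here is a minimal sketch (not the authors' code) of the undirected-graph variant on a toy quadratic problem with a fixed doubly stochastic mixing matrix. Each agent mixes its iterate with its neighbors', takes an inexact gradient step along its tracker `y`, and updates the tracker so that it follows the average gradient; the problem data `b`, the stepsize, and the complete-graph mixing matrix are illustrative assumptions.

```python
import numpy as np

# Toy problem: n agents minimize f(x) = sum_i 0.5*(x - b_i)^2,
# whose global (consensual) minimizer is the mean of the b_i.
n = 3
b = np.array([1.0, 2.0, 6.0])      # local data; minimizer = mean(b) = 3.0

def grad(x):
    # Stacked local gradients: entry i is the gradient of f_i at x_i.
    return x - b

# Doubly stochastic mixing matrix (complete graph chosen for simplicity;
# any connected doubly stochastic choice, e.g. Metropolis weights, works).
W = np.full((n, n), 1.0 / n)

alpha = 0.1                        # fixed stepsize (illustrative)
x = np.zeros(n)                    # agents' iterates x_i^k
y = grad(x)                        # gradient trackers, y_i^0 = grad f_i(x_i^0)

for _ in range(200):
    x_new = W @ x - alpha * y      # mix, then inexact gradient step
    y = W @ y + grad(x_new) - grad(x)  # gradient tracking update
    x = x_new

print(x)                           # all agents near the minimizer 3.0
```

The key invariant is that the trackers' average always equals the average of the current local gradients, which is what lets fixed stepsizes achieve geometric convergence instead of requiring diminishing ones.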

Original language: English (US)
Pages (from-to): 2597-2633
Number of pages: 37
Journal: SIAM Journal on Optimization
Volume: 27
Issue number: 4
DOIs
State: Published - Jan 1 2017

Keywords

  • Distributed optimization
  • Inexact gradient
  • Linear convergence
  • Small gain theorem
  • Time-varying graphs

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science