Achieving geometric convergence for distributed optimization over time-varying graphs

Angelia Nedich, Alex Olshevsky, and Wei Shi

Research output: Contribution to journal › Article

60 Citations (Scopus)

Abstract

This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed stepsizes and, yet, drives all the agents’ iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed stepsizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the stepsizes do not exceed some upper bounds. We establish explicit estimates for the convergence rates. When the graphs are undirected, our estimates show that the convergence rate of DIGing scales polynomially in the number of agents. We also provide some numerical experiments to demonstrate the efficacy of the proposed algorithms and to validate our theoretical findings.
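The DIGing iteration described in the abstract combines a consensus step with gradient tracking: each agent mixes its iterate with neighbors' iterates and takes a step along a tracker of the average gradient. A minimal sketch of that update on a toy problem follows; the static ring graph, quadratic local objectives, stepsize, and iteration count are illustrative assumptions, not the paper's general time-varying setting.

```python
import numpy as np

# Toy DIGing sketch: n agents minimize sum_i f_i(x) with f_i(x) = 0.5*(x - b[i])^2,
# whose global minimizer is mean(b). Here the graph is a fixed undirected ring.
n = 5
b = np.arange(n, dtype=float)       # local data; global minimizer is mean(b) = 2.0
grad = lambda x: x - b              # stacked per-agent gradients of the f_i

# Doubly stochastic mixing matrix for the ring (lazy Metropolis-style weights).
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

alpha = 0.1                         # fixed stepsize, as in the abstract
x = np.zeros(n)                     # agents' iterates
y = grad(x)                         # gradient trackers, initialized to local gradients

for _ in range(500):
    x_new = W @ x - alpha * y                # mix with neighbors, step along tracker
    y = W @ y + grad(x_new) - grad(x)        # gradient tracking update
    x = x_new

print(np.round(x, 4))               # all agents near the consensual minimizer 2.0
```

Because W is doubly stochastic, the trackers y preserve the sum of local gradients, so every agent's direction asymptotically matches the average gradient and the iterates contract geometrically toward the common minimizer.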

Original language: English (US)
Pages (from-to): 2597-2633
Number of pages: 37
Journal: SIAM Journal on Optimization
Volume: 27
Issue number: 4
DOIs: 10.1137/16M1084316
State: Published - Jan 1 2017


Keywords

  • Distributed optimization
  • Inexact gradient
  • Linear convergence
  • Small gain theorem
  • Time-varying graphs

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science

Cite this

Achieving geometric convergence for distributed optimization over time-varying graphs. / Nedich, Angelia; Olshevsky, Alex; Shi, Wei.

In: SIAM Journal on Optimization, Vol. 27, No. 4, 01.01.2017, p. 2597-2633.

Research output: Contribution to journal › Article

@article{eadafb67e9164eb2b0925878810c25c4,
title = "Achieving geometric convergence for distributed optimization over time-varying graphs",
abstract = "This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed stepsizes and, yet, drives all the agents’ iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed step-sizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the stepsizes do not exceed some upper bounds. We establish explicit estimates for the convergence rates. When the graph is undirected it shows that DIGing scales polynomially in the number of agents. We also provide some numerical experiments to demonstrate the efficacy of the proposed algorithms and to validate our theoretical findings.",
keywords = "Distributed optimization, Inexact gradient, Linear convergence, Small gain theorem, Time-varying graphs",
author = "Angelia Nedich and Alex Olshevsky and Wei Shi",
year = "2017",
month = "1",
day = "1",
doi = "10.1137/16M1084316",
language = "English (US)",
volume = "27",
pages = "2597--2633",
journal = "SIAM Journal on Optimization",
issn = "1052-6234",
publisher = "Society for Industrial and Applied Mathematics Publications",
number = "4",

}

TY - JOUR

T1 - Achieving geometric convergence for distributed optimization over time-varying graphs

AU - Nedich, Angelia

AU - Olshevsky, Alex

AU - Shi, Wei

PY - 2017/1/1

Y1 - 2017/1/1

N2 - This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed stepsizes and, yet, drives all the agents’ iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed step-sizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the stepsizes do not exceed some upper bounds. We establish explicit estimates for the convergence rates. When the graph is undirected it shows that DIGing scales polynomially in the number of agents. We also provide some numerical experiments to demonstrate the efficacy of the proposed algorithms and to validate our theoretical findings.

AB - This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed stepsizes and, yet, drives all the agents’ iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed step-sizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the stepsizes do not exceed some upper bounds. We establish explicit estimates for the convergence rates. When the graph is undirected it shows that DIGing scales polynomially in the number of agents. We also provide some numerical experiments to demonstrate the efficacy of the proposed algorithms and to validate our theoretical findings.

KW - Distributed optimization

KW - Inexact gradient

KW - Linear convergence

KW - Small gain theorem

KW - Time-varying graphs

UR - http://www.scopus.com/inward/record.url?scp=85040745306&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85040745306&partnerID=8YFLogxK

U2 - 10.1137/16M1084316

DO - 10.1137/16M1084316

M3 - Article

VL - 27

SP - 2597

EP - 2633

JO - SIAM Journal on Optimization

JF - SIAM Journal on Optimization

SN - 1052-6234

IS - 4

ER -