Abstract
This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and fixed step-sizes, yet it drives all the agents' iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thereby obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed step-sizes, and it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that both algorithms converge at R-linear (geometric) rates as long as the step-sizes do not exceed certain upper bounds, and we establish explicit estimates of the convergence rates. For undirected graphs, these estimates show that the convergence rate of DIGing scales polynomially in the number of agents. We also provide numerical experiments that demonstrate the efficacy of the proposed algorithms and validate our theoretical findings.
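For concreteness, the following is a minimal NumPy sketch of a DIGing-style iteration, which combines consensus mixing with a doubly stochastic matrix, an inexact gradient step with a fixed step-size, and a gradient tracking update. The function names, the problem instance, and the parameter choices (`grad_f`, `W_seq`, `alpha`) are illustrative assumptions, not the paper's reference implementation or its recommended tuning.

```python
import numpy as np

def diging(grad_f, W_seq, x0, alpha, num_iters):
    """Sketch of a DIGing-style iteration: each agent mixes its estimate with
    neighbors via a doubly stochastic matrix W(k) and descends along a
    tracker y that estimates the average gradient across agents."""
    n, d = x0.shape                                       # n agents, d-dimensional variable
    x = x0.copy()
    y = np.array([grad_f[i](x[i]) for i in range(n)])     # y_i(0) = grad f_i(x_i(0))
    g_old = y.copy()
    for k in range(num_iters):
        W = W_seq(k)                                      # doubly stochastic mixing matrix at time k
        x = W @ x - alpha * y                             # consensus step plus inexact gradient step
        g_new = np.array([grad_f[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g_old                         # gradient tracking update
        g_old = g_new
    return x

# Toy usage (hypothetical data): each agent holds a strongly convex quadratic
# f_i(x) = 0.5 * ||A_i x - b_i||^2, and the mixing matrix is static uniform averaging.
rng = np.random.default_rng(0)
n, d = 5, 3
A = rng.standard_normal((n, d, d)) + 2 * np.eye(d)
b = rng.standard_normal((n, d))
grad_f = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(n)]
W_static = np.full((n, n), 1.0 / n)                       # doubly stochastic (complete graph)
x_final = diging(grad_f, lambda k: W_static, np.zeros((n, d)), alpha=0.02, num_iters=1000)
```

In a time-varying setting, `W_seq(k)` would return a different doubly stochastic matrix at each iteration; the fixed step-size and the gradient tracker are what allow the iterates to reach a consensual minimizer despite the changing topology.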
| | |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 2597-2633 |
| Number of pages | 37 |
| Journal | SIAM Journal on Optimization |
| Volume | 27 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2017 |
Keywords
- Distributed optimization
- Inexact gradient
- Linear convergence
- Small gain theorem
- Time-varying graphs
ASJC Scopus subject areas
- Software
- Theoretical Computer Science