Abstract
We investigate the convergence rate of the recently proposed subgradient-push method for distributed optimization over time-varying directed graphs. The subgradient-push method can be implemented in a distributed way without requiring knowledge of either the number of agents or the graph sequence; each node only needs to know its own out-degree at each time. Our main result is a convergence rate of O((ln t)/t) for strongly convex functions with Lipschitz gradients, even when only stochastic gradient samples are available; this is asymptotically faster than the O((ln t)/√t) rate previously known for (general) convex functions.
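The mechanics described above (push-sum averaging combined with a decaying-step gradient correction) can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the fixed three-node directed graph, the quadratic local costs f_i(x) = ½(x − a_i)², and the step size 1/t are all assumptions chosen for the example. Each node only uses its own out-degree, as in the method.

```python
# Illustrative sketch of subgradient-push on a fixed directed graph.
# Assumed setup (not from the paper): 3 nodes, node 0 sends to {0,1,2},
# node 1 to {1,2}, node 2 to {2,0}; local costs f_i(x) = 0.5*(x - a_i)^2,
# whose global minimizer is the average of the a_i.

def subgradient_push(a, num_iters=5000):
    n = len(a)
    out_neighbors = [(0, 1, 2), (1, 2), (2, 0)]  # includes self-loops
    x = [0.0] * n  # optimization variables
    y = [1.0] * n  # push-sum weights correcting for graph asymmetry
    z = [0.0] * n  # de-biased estimates z_i = w_i / y_i
    for t in range(1, num_iters + 1):
        # Push step: each node splits its values evenly over its
        # out-neighbors, using only its own out-degree.
        w_new = [0.0] * n
        y_new = [0.0] * n
        for i in range(n):
            deg = len(out_neighbors[i])
            for j in out_neighbors[i]:
                w_new[j] += x[i] / deg
                y_new[j] += y[i] / deg
        # De-bias and take a gradient step with decaying step size 1/t,
        # as used for strongly convex costs.
        alpha = 1.0 / t
        for i in range(n):
            z[i] = w_new[i] / y_new[i]
            grad = z[i] - a[i]  # gradient of f_i at z_i
            x[i] = w_new[i] - alpha * grad
        y = y_new
    return z

estimates = subgradient_push([1.0, 2.0, 3.0])
# All nodes should approach the global minimizer (the average, 2.0).
```

The push-sum ratio z_i = w_i/y_i is what removes the bias a directed (non-doubly-stochastic) communication pattern would otherwise introduce into plain averaging.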
| Original language | English (US) |
| --- | --- |
| Article number | 7405263 |
| Pages (from-to) | 3936-3947 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 61 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2016 |
| Externally published | Yes |
Keywords
- Distributed algorithms
- gradient methods
- parameter estimation
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering