Abstract

We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under the general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
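
The abstract names the two mechanisms behind the method: a Hessian-corrected ("curvature-aided") estimate of the total gradient assembled from stale per-function information, and activation of the component functions by a random walk on a connected communication graph. The Python sketch below illustrates only those two ingredients on a toy finite-sum quadratic; it is a minimal illustration under assumptions of our own (quadratic components, a ring graph, a uniform random walk, and names such as `x_mem`, `H_sum`, `c_sum`), not the paper's SUCAG update, whose unbiasedness correction and step-size rule are given in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 5                                   # 8 component functions in R^5

# Illustrative strongly convex quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x
# (our own toy data, not from the paper).
A = [np.eye(d) + M @ M.T for M in rng.standard_normal((n, d, d))]
b = rng.standard_normal((n, d))
grad = lambda i, x: A[i] @ x - b[i]           # gradient of f_i
hess = lambda i, x: A[i]                      # Hessian of f_i (constant here)

# Ring graph; a token performs a random walk, visiting one agent per step,
# so on a connected graph every function is refreshed infinitely often.
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x = np.zeros(d)
x_mem = [x.copy() for _ in range(n)]          # iterate at agent i's last refresh
H_sum = sum(hess(i, x_mem[i]) for i in range(n))
c_sum = sum(grad(i, x_mem[i]) - hess(i, x_mem[i]) @ x_mem[i] for i in range(n))

L_avg = np.linalg.eigvalsh(sum(A) / n).max()  # smoothness constant of the average
step, agent = 1.0 / L_avg, 0

for _ in range(2000):
    # Curvature-aided surrogate of the full gradient at the current x:
    # (1/n) sum_i [grad f_i(x_i) + hess f_i(x_i) (x - x_i)]
    g = (c_sum + H_sum @ x) / n
    x = x - step * g
    # The visited agent refreshes its stored gradient and Hessian at the new x.
    H_sum += hess(agent, x) - hess(agent, x_mem[agent])
    c_sum += ((grad(agent, x) - hess(agent, x) @ x)
              - (grad(agent, x_mem[agent]) - hess(agent, x_mem[agent]) @ x_mem[agent]))
    x_mem[agent] = x.copy()
    agent = int(rng.choice(neighbors[agent]))  # Markov-chain (random-walk) gossip step

x_star = np.linalg.solve(sum(A), sum(b))       # exact minimizer of the quadratic sum
print("distance to optimum:", np.linalg.norm(x - x_star))
```

For quadratics the Hessian term reconstructs each stale gradient exactly, so the tracked quantity coincides with the true total gradient; for general strongly convex components the correction is only locally accurate, which is where the convergence analysis summarized in the abstract takes over.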

Original language: English (US)
Title of host publication: 2018 IEEE Conference on Decision and Control, CDC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1751-1756
Number of pages: 6
ISBN (Electronic): 9781538613955
DOIs: https://doi.org/10.1109/CDC.2018.8619336
State: Published - Jan 18 2019
Event: 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States
Duration: Dec 17 2018 - Dec 19 2018

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2018-December
ISSN (Print): 0743-1546

Conference

Conference: 57th IEEE Conference on Decision and Control, CDC 2018
Country: United States
City: Miami
Period: 12/17/18 - 12/19/18

Keywords

  • Distributed optimization
  • Asynchronous algorithms
  • Incremental methods
  • Machine learning
  • Multiagent systems
  • Randomized algorithms

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Cite this

Wai, H. T., Freris, N. M., Nedich, A., & Scaglione, A. (2019). SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization. In 2018 IEEE Conference on Decision and Control, CDC 2018 (pp. 1751-1756). [8619336] (Proceedings of the IEEE Conference on Decision and Control; Vol. 2018-December). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2018.8619336

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
