Distributed subgradient methods and quantization effects

Angelia Nedich, Alex Olshevsky, Asuman Ozdaglar, John N. Tsitsiklis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

80 Citations (Scopus)

Abstract

We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based on our recent results on the convergence time of distributed averaging algorithms, we derive improved upper bounds on the convergence rate of the unquantized subgradient method. We then propose a distributed subgradient method under the additional constraint that agents can only store and communicate quantized information, and we provide bounds on its convergence rate that highlight the dependence on the number of quantization levels.
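The combination of consensus averaging, a local subgradient step, and quantized storage described in the abstract can be sketched in code. The following is an illustrative sketch only, not the algorithm as stated in the paper: it assumes scalar decision variables, simple quadratic local objectives f_i(x) = (x - c_i)^2, a fixed (rather than time-varying) doubly stochastic weight matrix, and a uniform quantizer; all names are hypothetical.

```python
# Illustrative sketch of a distributed subgradient method with consensus
# averaging, plus a quantized variant. Assumptions (not from the paper):
# scalar states, quadratic local objectives, fixed weight matrix W.
import numpy as np

def distributed_subgradient(c, W, alpha=0.05, iters=2000, levels=None):
    """Each agent i holds a scalar x_i; one step is
         x_i <- sum_j W[i, j] * x_j  -  alpha * f_i'(x_i),
    optionally rounding states to a uniform grid of the given resolution
    to mimic agents that can only store quantized values."""
    x = np.zeros(len(c))
    for _ in range(iters):
        grad = 2.0 * (x - c)          # subgradient of f_i(x) = (x - c_i)^2
        x = W @ x - alpha * grad      # consensus step + subgradient step
        if levels is not None:        # uniform quantization of stored state
            step = 1.0 / levels
            x = np.round(x / step) * step
    return x

# Four agents on a ring; the minimizer of sum_i (x - c_i)^2 is mean(c).
c = np.array([0.0, 1.0, 2.0, 3.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic

x_exact = distributed_subgradient(c, W)               # unquantized run
x_quant = distributed_subgradient(c, W, levels=100)   # quantized run
```

With a constant stepsize the iterates settle in a neighborhood of the optimum rather than converging exactly, and the quantized run adds a further error that shrinks as the number of quantization levels grows, consistent with the dependence the paper's bounds capture.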

Original language: English (US)
Title of host publication: Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008
Pages: 4177-4184
Number of pages: 8
DOI: 10.1109/CDC.2008.4738860
ISBN: 9781424431243
State: Published - 2008
Externally published: Yes
Event: 47th IEEE Conference on Decision and Control, CDC 2008 - Cancun, Mexico
Duration: Dec 9, 2008 - Dec 11, 2008

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Cite this

Nedich, A., Olshevsky, A., Ozdaglar, A., & Tsitsiklis, J. N. (2008). Distributed subgradient methods and quantization effects. In Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008 (pp. 4177-4184). [4738860] https://doi.org/10.1109/CDC.2008.4738860

