Distributed asynchronous incremental subgradient methods

Angelia Nedich, D. P. Bertsekas, V. S. Borkar

Research output: Contribution to journal › Article

61 Citations (Scopus)

Abstract

We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, by taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.
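
The abstract describes the iteration only in prose, so here is a minimal sketch of the underlying incremental subgradient update in Python. Everything concrete in it is an assumption for illustration: the component functions f_i(x) = |a_i . x - b_i|, the random data, and the constant stepsize are not from the paper, and the loop is centralized and synchronous, whereas the paper's contribution is the distributed asynchronous coordination of these steps.

    import numpy as np

    # Minimal sketch of the incremental subgradient iteration for
    # minimizing f(x) = sum_i f_i(x). The components
    # f_i(x) = |a_i . x - b_i|, the random data, and the constant
    # stepsize are illustrative assumptions, not from the paper.
    # This loop is centralized and synchronous; the paper's method
    # instead distributes the component subgradient computations
    # among processors and has a coordinator apply whichever
    # component subgradients are available at each update.

    rng = np.random.default_rng(0)
    m, n = 50, 10                      # number of components, dimension
    A = rng.normal(size=(m, n))
    b = rng.normal(size=m)

    def component_subgradient(x, i):
        # sign(a_i . x - b_i) * a_i is a subgradient of |a_i . x - b_i|.
        return np.sign(A[i] @ x - b[i]) * A[i]

    x = np.zeros(n)
    alpha = 0.01                       # constant stepsize, for simplicity
    for k in range(200):               # passes over the components
        for i in range(m):             # one step per component subgradient
            x = x - alpha * component_subgradient(x, i)

    print("f(x) =", float(np.abs(A @ x - b).sum()))

With m = 50 components, each pass makes 50 small steps rather than one full subgradient step along the sum, which is the incremental behavior that, per the abstract, has performed very well in centralized computation.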

Original language: English (US)
Pages (from-to): 381-407
Number of pages: 27
Journal: Studies in Computational Mathematics
Volume: 8
Issue number: C
DOIs: 10.1016/S1570-579X(01)80023-9
State: Published - 2001
Externally published: Yes

Fingerprint

  • Subgradient Method
  • Subgradient
  • Lagrangian Relaxation
  • Number of Components
  • Parallel Implementation
  • Convex function
  • Update
  • Iteration

ASJC Scopus subject areas

  • Computational Mathematics

Cite this

Distributed asynchronous incremental subgradient methods. / Nedich, Angelia; Bertsekas, D. P.; Borkar, V. S.

In: Studies in Computational Mathematics, Vol. 8, No. C, 2001, p. 381-407.

Research output: Contribution to journal › Article

Nedich, Angelia ; Bertsekas, D. P. ; Borkar, V. S. / Distributed asynchronous incremental subgradient methods. In: Studies in Computational Mathematics. 2001 ; Vol. 8, No. C. pp. 381-407.
@article{5697ace5ee404905847ea903716b237e,
title = "Distributed asynchronous incremental subgradient methods",
abstract = "We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, by taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.",
author = "Angelia Nedich and Bertsekas, {D. P.} and Borkar, {V. S.}",
year = "2001",
doi = "10.1016/S1570-579X(01)80023-9",
language = "English (US)",
volume = "8",
pages = "381--407",
journal = "Studies in Computational Mathematics",
issn = "1570-579X",
publisher = "Elsevier",
number = "C",

}

TY - JOUR

T1 - Distributed asynchronous incremental subgradient methods

AU - Nedich, Angelia

AU - Bertsekas, D. P.

AU - Borkar, V. S.

PY - 2001

Y1 - 2001

N2 - We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, by taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.

AB - We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, by taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.

UR - http://www.scopus.com/inward/record.url?scp=77956652909&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77956652909&partnerID=8YFLogxK

U2 - 10.1016/S1570-579X(01)80023-9

DO - 10.1016/S1570-579X(01)80023-9

M3 - Article

AN - SCOPUS:77956652909

VL - 8

SP - 381

EP - 407

JO - Studies in Computational Mathematics

JF - Studies in Computational Mathematics

SN - 1570-579X

IS - C

ER -