Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

S. Sundhar Ram, Angelia Nedich, V. V. Veeravalli

Research output: Contribution to journal › Article

330 Citations (Scopus)

Abstract

We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
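
The per-agent update the abstract describes can be sketched concretely. The following Python script is a minimal toy illustration (not the authors' code): each agent i holds the quadratic f_i(x) = 0.5*||x - c_i||^2, the common constraint set is the unit Euclidean ball, the mixing weights come from a ring network, and zero-mean Gaussian noise stands in for the stochastic subgradient errors. The targets c_i, the noise level, and the 1/(k+1) stepsize are all assumptions chosen for the demo.

    import numpy as np

    def project_ball(x, radius):
        # Euclidean projection onto the constraint set {x : ||x|| <= radius}.
        norm = np.linalg.norm(x)
        return x if norm <= radius else (radius / norm) * x

    rng = np.random.default_rng(0)
    n_agents, dim, radius = 5, 2, 1.0
    targets = 2.0 * rng.normal(size=(n_agents, dim))  # c_i in f_i(x) = 0.5*||x - c_i||^2 (assumed demo data)

    # Doubly stochastic mixing matrix for a ring: each agent averages itself and its two neighbors.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1.0 / 3.0

    x = rng.normal(size=(n_agents, dim))  # one iterate per agent

    for k in range(5000):
        step = 1.0 / (k + 1)                         # diminishing stepsize
        mixed = W @ x                                # weighted average of own and neighbors' iterates
        grads = mixed - targets                      # exact (sub)gradients of the local objectives
        noise = 0.1 * rng.normal(size=grads.shape)   # zero-mean stochastic subgradient errors
        for i in range(n_agents):
            # Noisy subgradient step, then projection onto the constraint set.
            x[i] = project_ball(mixed[i] - step * (grads[i] + noise[i]), radius)

    print("final iterates:\n", x)
    print("constrained optimum:", project_ball(targets.mean(axis=0), radius))

With this diminishing stepsize and zero-mean errors, all agents' iterates cluster around the projection of the average of the c_i onto the ball (the constrained minimizer of the sum), illustrating the consensus-and-convergence behavior stated in the abstract.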

Original language: English (US)
Pages (from-to): 516-545
Number of pages: 30
Journal: Journal of Optimization Theory and Applications
Volume: 147
Issue number: 3
DOIs: 10.1007/s10957-010-9737-7
State: Published - Dec 2010
Externally published: Yes

Keywords

  • Convex optimization
  • Distributed algorithm
  • Stochastic approximation
  • Subgradient methods

ASJC Scopus subject areas

  • Applied Mathematics
  • Control and Optimization
  • Management Science and Operations Research

Cite this

Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization. / Sundhar Ram, S.; Nedich, Angelia; Veeravalli, V. V.

In: Journal of Optimization Theory and Applications, Vol. 147, No. 3, 12.2010, p. 516-545.

Research output: Contribution to journal › Article

@article{ed93658984324067920e3b4ad87eae70,
title = "Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization",
abstract = "We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.",
keywords = "Convex optimization, Distributed algorithm, Stochastic approximation, Subgradient methods",
author = "{Sundhar Ram}, S. and Angelia Nedich and Veeravalli, {V. V.}",
year = "2010",
month = "12",
doi = "10.1007/s10957-010-9737-7",
language = "English (US)",
volume = "147",
pages = "516--545",
journal = "Journal of Optimization Theory and Applications",
issn = "0022-3239",
publisher = "Springer New York",
number = "3",

}

TY - JOUR
T1 - Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization
AU - Sundhar Ram, S.
AU - Nedich, Angelia
AU - Veeravalli, V. V.
PY - 2010/12
Y1 - 2010/12
N2 - We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
AB - We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
KW - Convex optimization
KW - Distributed algorithm
KW - Stochastic approximation
KW - Subgradient methods
UR - http://www.scopus.com/inward/record.url?scp=78049361018&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78049361018&partnerID=8YFLogxK
U2 - 10.1007/s10957-010-9737-7
DO - 10.1007/s10957-010-9737-7
M3 - Article
AN - SCOPUS:78049361018
VL - 147
SP - 516
EP - 545
JO - Journal of Optimization Theory and Applications
JF - Journal of Optimization Theory and Applications
SN - 0022-3239
IS - 3
ER -