### Abstract

We consider a distributed multi-agent network system in which the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates its iterates to its neighbors. Each agent then forms a weighted average of the received iterates and its own iterate, adjusts the result using subgradient information (known with stochastic errors) of its own objective function, and projects onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then its convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that, for diminishing stepsizes, the agents reach consensus in mean and the function values converge in mean to the optimal value. When the mean errors diminish sufficiently fast, we strengthen these results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
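The per-agent update described above — mix the neighbors' iterates by consensus weights, take a stochastic subgradient step, and project onto the constraint set — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact method: it assumes quadratic objectives f_i(x) = ½‖x − c_i‖² (so the subgradient is x − c_i), a Euclidean-ball constraint set, a fixed ring network with doubly stochastic weights, and additive zero-mean Gaussian subgradient noise. The names `W` and `project_ball`, the network, and the stepsize choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, R = 4, 2, 10.0
c = rng.normal(size=(n_agents, dim))  # per-agent data; the optimum is mean(c)

# Doubly stochastic mixing weights for a ring network (illustrative choice).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def project_ball(x, radius):
    """Euclidean projection onto the constraint set {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

x = rng.normal(size=(n_agents, dim))  # initial iterates, one row per agent
for k in range(1, 5001):
    alpha = 1.0 / k                   # diminishing stepsize
    mixed = W @ x                     # weighted average of neighbors' iterates
    for i in range(n_agents):
        grad = mixed[i] - c[i]        # subgradient of f_i at the mixed point
        noise = 0.1 * rng.normal(size=dim)  # stochastic subgradient error
        x[i] = project_ball(mixed[i] - alpha * (grad + noise), R)

print("agents' iterates:", x)
print("optimal point   :", c.mean(axis=0))  # minimizer of sum_i f_i over the ball
```

With the diminishing stepsize α_k = 1/k and zero-mean noise, all agents' iterates drift toward the common minimizer, consistent with the consensus-and-convergence behavior the abstract describes for errors whose means vanish.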

| Original language | English (US) |
|---|---|
| Pages (from-to) | 516-545 |
| Number of pages | 30 |
| Journal | Journal of Optimization Theory and Applications |
| Volume | 147 |
| Issue number | 3 |
| DOIs | https://doi.org/10.1007/s10957-010-9737-7 |
| State | Published - Dec 2010 |
| Externally published | Yes |


### Keywords

- Convex optimization
- Distributed algorithm
- Stochastic approximation
- Subgradient methods

### ASJC Scopus subject areas

- Applied Mathematics
- Control and Optimization
- Management Science and Operations Research

### Cite this

Research output: Contribution to journal › Article

Sundhar Ram, S., Nedich, A., & Veeravalli, V. V. (2010). Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization. *Journal of Optimization Theory and Applications*, *147*(3), 516-545. https://doi.org/10.1007/s10957-010-9737-7
