### Abstract

We consider a distributed multi-agent network system whose goal is to minimize a sum of convex functions, each of which is known (with stochastic errors) to a specific network agent. We are interested in asynchronous algorithms for solving the problem over a connected network in which the communications among the agents are random. At each time, a random set of agents communicates and updates its information. When updating, an agent uses the (sub)gradient of its individual objective function and its own stepsize value. The algorithm is completely asynchronous, as it requires neither coordination of agent actions nor coordination of the stepsize values. We investigate the asymptotic error bounds of the algorithm with a constant stepsize for both strongly convex and merely convex functions. Our error bounds capture the effects of the agent stepsize choices and the structure of the agent connectivity graph. When the agent objective functions are strongly convex, the error bound scales at best linearly in the number m of agents.
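The update rule the abstract describes (random communicating agents average their information, then each takes a noisy local (sub)gradient step with its own constant stepsize) can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: it assumes pairwise gossip over a complete graph, scalar quadratic (strongly convex) local objectives f_i(x) = 0.5(x - c_i)^2, and additive Gaussian gradient noise; all names (`c`, `alpha`, `x`) are made up for the sketch.

```python
import numpy as np

# Illustrative sketch of asynchronous gossip-based stochastic (sub)gradient
# descent. Agent k holds estimate x[k], a private constant stepsize alpha[k]
# (uncoordinated across agents), and local objective f_k(x) = 0.5*(x - c[k])**2
# whose gradient is x - c[k]. The sum-objective minimizer is mean(c).
rng = np.random.default_rng(0)
m = 5                                # number of agents
c = rng.normal(size=m)               # per-agent optima
alpha = rng.uniform(0.01, 0.05, m)   # uncoordinated per-agent stepsizes
x = np.zeros(m)                      # agent estimates

for _ in range(20000):
    i, j = rng.choice(m, size=2, replace=False)    # random communicating pair
    avg = 0.5 * (x[i] + x[j])                      # random consensus (averaging)
    for k in (i, j):
        grad = (avg - c[k]) + 0.01 * rng.normal()  # stochastic gradient error
        x[k] = avg - alpha[k] * grad               # local (sub)gradient step

# With constant stepsizes the iterates do not converge exactly; they settle
# near the network-wide optimum mean(c) within an error floor determined by
# the stepsizes, the noise, and the communication pattern.
print(x, c.mean())
```

The persistent gap between `x` and `mean(c)` is the kind of asymptotic error the paper bounds as a function of the stepsize choices and the connectivity graph.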

Original language | English (US) |
---|---|

Title of host publication | 2010 Information Theory and Applications Workshop, ITA 2010 - Conference Proceedings |

Pages | 342-351 |

Number of pages | 10 |

DOIs | https://doi.org/10.1109/ITA.2010.5454103 |

State | Published - 2010 |

Externally published | Yes |

Event | 2010 Information Theory and Applications Workshop, ITA 2010 - San Diego, CA, United States Duration: Jan 31 2010 → Feb 5 2010 |

### Other

Other | 2010 Information Theory and Applications Workshop, ITA 2010 |
---|---|

Country | United States |

City | San Diego, CA |

Period | 1/31/10 → 2/5/10 |

### Keywords

- Algorithms
- Asynchronous algorithms
- Convex optimization
- Networked system
- Random consensus
- Stochastic

### ASJC Scopus subject areas

- Computer Science Applications
- Information Systems

### Cite this

Touri, B., Nedich, A., & Ram, S. S. (2010). Asynchronous stochastic convex optimization over random networks: Error bounds. In *2010 Information Theory and Applications Workshop, ITA 2010 - Conference Proceedings* (pp. 342-351). [5454103]. ISBN 9781424470143. https://doi.org/10.1109/ITA.2010.5454103

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
