Distributed stochastic optimization under imperfect information

Aswin Kannan, Angelia Nedich, Uday V. Shanbhag

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

We consider a stochastic convex optimization problem that requires minimizing a sum of misspecified agent-specific expectation-valued convex functions over the intersection of a collection of agent-specific convex sets. This misspecification is manifested in a parametric sense and may be resolved through solving a distinct stochastic convex learning problem. Our interest lies in the development of distributed algorithms in which every agent makes decisions based on the knowledge of its objective and feasibility set while learning the decisions of other agents by communicating with its local neighbors over a time-varying connectivity graph. While a significant body of research currently exists in the context of such problems, we believe that the misspecified generalization of this problem is both important and, as yet, little studied. Accordingly, our focus lies on the simultaneous resolution of both problems through a joint set of schemes that combine three distinct steps: (i) an alignment step in which every agent updates its current belief by averaging over the beliefs of its neighbors; (ii) a projected (stochastic) gradient step in which every agent further updates this averaged estimate; and (iii) a learning step in which agents update their belief of the misspecified parameter by utilizing a stochastic gradient step. Under an assumption of mere convexity on agent objectives and strong convexity of the learning problems, we show that the sequences generated by this collection of update rules converge almost surely to the solution of the correctly specified stochastic convex optimization problem and the stochastic learning problem, respectively.
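The three-step scheme described in the abstract can be illustrated in a toy setting. Everything below is an illustrative sketch, not the authors' actual algorithm or analysis: the scalar quadratic objectives, the alternating ring communication graph, the diminishing step size, and the scalar learning problem are all assumptions chosen so the alignment, projected-gradient, and learning steps are visible in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                      # number of agents
T = 3000                   # iterations
theta_true = 2.0           # correctly specified parameter (unknown to agents)

# Hypothetical agent objectives f_i(x; theta) = 0.5*(x - a_i*theta)^2, so the
# sum is minimized over the common set [0, 5] at x* = theta_true * mean(a).
a = np.array([0.5, 1.0, 1.5, 2.0])
x = rng.uniform(0, 5, size=N)     # agents' decision estimates
theta = np.zeros(N)               # agents' beliefs about the parameter

def project(z, lo=0.0, hi=5.0):
    """Euclidean projection onto the (shared) feasible interval."""
    return np.clip(z, lo, hi)

for k in range(1, T + 1):
    gamma = 1.0 / k  # diminishing step size

    # (i) alignment: each agent averages with one neighbor on a ring whose
    # direction alternates over time (a simple time-varying, doubly
    # stochastic mixing).
    shift = 1 if k % 2 == 0 else -1
    v = 0.5 * (x + np.roll(x, shift))

    # (ii) projected stochastic gradient step on the decision, evaluated at
    # the agent's *current belief* theta_i and corrupted by sampling noise.
    grad = (v - a * theta) + rng.normal(0.0, 0.1, size=N)
    x = project(v - gamma * grad)

    # (iii) learning step: stochastic gradient on the strongly convex
    # learning problem min_theta E[0.5*(theta - y)^2], y ~ N(theta_true, 1).
    y = theta_true + rng.normal(0.0, 1.0, size=N)
    theta = theta - gamma * (theta - y)

# With these assumptions, theta_i -> theta_true and all x_i approach
# x* = theta_true * a.mean() = 2.5, mirroring the paper's qualitative claim
# that both problems are resolved simultaneously.
```

The learning update here reduces to a running average of the noisy observations, a simple instance of a strongly convex stochastic learning problem; the decision update is a standard consensus-plus-projected-gradient iteration driven by the evolving parameter belief.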

Original language: English (US)
Title of host publication: 2015 54th IEEE Conference on Decision and Control, CDC 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 400-405
Number of pages: 6
Volume: 2016-February
ISBN (Electronic): 9781479978861
DOI: 10.1109/CDC.2015.7402233
State: Published - Feb 8 2016
Externally published: Yes
Event: 54th IEEE Conference on Decision and Control, CDC 2015 - Osaka, Japan
Duration: Dec 15 2015 - Dec 18 2015

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Cite this

Kannan, A., Nedich, A., & Shanbhag, U. V. (2016). Distributed stochastic optimization under imperfect information. In 2015 54th IEEE Conference on Decision and Control, CDC 2015 (Vol. 2016-February, pp. 400-405). [7402233] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2015.7402233
