Distributed learning with infinitely many hypotheses

Angelia Nedich, Alex Olshevsky, Cesar A. Uribe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

We consider a distributed learning setup in which the agents of a network sequentially access realizations of a set of random variables with unknown distributions. The network's objective is to find a parametrized distribution that best describes the agents' joint observations in the sense of the Kullback-Leibler divergence. We analyze both the case of countably many hypotheses and the case of a continuum of hypotheses. We provide non-asymptotic bounds on the concentration rate of the agents' beliefs around the correct hypothesis, expressed in terms of the number of agents, the network parameters, and the learning abilities of the agents. Additionally, we give a novel motivation for a general class of distributed non-Bayesian update rules as instances of the distributed stochastic mirror descent algorithm.
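To make the setup concrete: the network objective can be read as jointly minimizing the sum over agents of the divergences D_KL(P_i || l_i(. | theta)) between each agent's unknown data distribution P_i and its parametrized model l_i(. | theta). Below is a minimal, hypothetical Python sketch of one round of a distributed non-Bayesian update of the kind the abstract describes (geometric averaging of neighbors' beliefs followed by a local Bayesian step). The finite hypothesis grid, the function name, and the specific mixing matrix are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def belief_update(beliefs, A, log_likelihoods):
        """One round of a distributed non-Bayesian update (illustrative sketch).

        beliefs:         (n_agents, n_hyp) current beliefs, rows sum to 1
        A:               (n_agents, n_agents) row-stochastic mixing matrix
        log_likelihoods: (n_agents, n_hyp) log-likelihood of each agent's
                         newest private observation under each hypothesis
        """
        # Consensus in log space = geometric averaging of neighbors' beliefs,
        # followed by a local Bayesian update with the new observation.
        log_b = A @ np.log(beliefs) + log_likelihoods
        log_b -= log_b.max(axis=1, keepdims=True)  # guard against underflow
        b = np.exp(log_b)
        return b / b.sum(axis=1, keepdims=True)    # renormalize per agent

    # Toy usage: 3 agents, 4 hypotheses, synthetic observations (assumed data).
    rng = np.random.default_rng(0)
    A = np.array([[.5, .25, .25], [.25, .5, .25], [.25, .25, .5]])
    beliefs = np.full((3, 4), 0.25)
    for _ in range(50):
        log_lik = rng.normal(size=(3, 4))          # stand-in for real likelihoods
        log_lik[:, 0] += 0.5                       # hypothesis 0 fits best
        beliefs = belief_update(beliefs, A, log_lik)
    print(beliefs.round(3))                        # mass concentrates on column 0

In this sketch, taking logs turns the multiplicative (geometric-mean) aggregation into an ordinary matrix product, which is also how such rules connect to mirror descent with the entropic mirror map.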

Original language: English (US)
Title of host publication: 2016 IEEE 55th Conference on Decision and Control, CDC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6321-6326
Number of pages: 6
ISBN (Electronic): 9781509018376
DOI: 10.1109/CDC.2016.7799242
State: Published - Dec 27 2016
Event: 55th IEEE Conference on Decision and Control, CDC 2016 - Las Vegas, United States
Duration: Dec 12 2016 - Dec 14 2016



ASJC Scopus subject areas

  • Artificial Intelligence
  • Decision Sciences (miscellaneous)
  • Control and Optimization

Cite this

Nedich, A., Olshevsky, A., & Uribe, C. A. (2016). Distributed learning with infinitely many hypotheses. In 2016 IEEE 55th Conference on Decision and Control, CDC 2016 (pp. 6321-6326). [7799242] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2016.7799242
