Distributed learning with infinitely many hypotheses

Angelia Nedich, Alex Olshevsky, Cesar A. Uribe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

We consider a distributed learning setup where a network of agents sequentially access realizations of a set of random variables with unknown distributions. The network objective is to find a parametrized distribution that best describes their joint observations in the sense of the Kullback-Leibler divergence. We analyze the case of countably many hypotheses and the case of a continuum of hypotheses. We provide non-asymptotic bounds for the concentration rate of the agents' beliefs around the correct hypothesis in terms of the number of agents, the network parameters, and the learning abilities of the agents. Additionally, we provide a novel motivation for a general set of distributed non-Bayesian update rules as instances of the distributed stochastic mirror descent algorithm.
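
The update rules behind this class of results are typically log-linear: each agent geometrically averages its neighbors' beliefs and then applies a local Bayesian update with its newest observation. The display below is a minimal sketch of such a rule in assumed notation (the mixing weights a_ij, beliefs mu_{i,t}, local likelihoods, and hypothesis set Theta are illustrative symbols, not copied from the paper):

% Sketch of a log-linear distributed non-Bayesian update (assumed notation).
% Agent i geometrically averages its neighbors' beliefs with weights a_{ij},
% then reweights by the likelihood of its new observation s_{i,t+1}.
\[
  \mu_{i,t+1}(\theta)
  \;=\;
  \frac{\ell_i\!\left(s_{i,t+1} \mid \theta\right)\,
        \prod_{j=1}^{n} \mu_{j,t}(\theta)^{a_{ij}}}
       {\displaystyle\int_{\Theta}
        \ell_i\!\left(s_{i,t+1} \mid \hat{\theta}\right)\,
        \prod_{j=1}^{n} \mu_{j,t}(\hat{\theta})^{a_{ij}}\, d\lambda(\hat{\theta})}
\]
% In log-belief coordinates this is one stochastic mirror descent step with the
% entropic mirror map, which is the kind of connection the abstract refers to.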

Original language: English (US)
Title of host publication: 2016 IEEE 55th Conference on Decision and Control, CDC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6321-6326
Number of pages: 6
ISBN (Electronic): 9781509018376
DOIs
State: Published - Dec 27 2016
Event: 55th IEEE Conference on Decision and Control, CDC 2016 - Las Vegas, United States
Duration: Dec 12 2016 - Dec 14 2016

Publication series

Name: 2016 IEEE 55th Conference on Decision and Control, CDC 2016

Other

Other: 55th IEEE Conference on Decision and Control, CDC 2016
Country/Territory: United States
City: Las Vegas
Period: 12/12/16 - 12/14/16

ASJC Scopus subject areas

  • Artificial Intelligence
  • Decision Sciences (miscellaneous)
  • Control and Optimization
