A tutorial on distributed (non-Bayesian) learning: Problem, algorithms and results

Angelia Nedich, Alex Olshevsky, Cesar A. Uribe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Scopus citations

Abstract

We overview some results on distributed learning with a focus on a family of recently proposed algorithms known as non-Bayesian social learning. We consider different approaches to the distributed learning problem and its algorithmic solutions for the case of finitely many hypotheses. The original centralized problem is discussed first, followed by a generalization to the distributed setting. Results on convergence and convergence rates are presented for both the asymptotic and finite-time regimes. Various extensions are discussed, such as those dealing with directed time-varying networks, Nesterov's acceleration technique, and a continuum set of hypotheses.
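
As a rough illustration of the algorithm family the abstract refers to, below is a minimal sketch of one step of the log-linear ("geometric averaging") non-Bayesian social learning rule for finitely many hypotheses. The mixing matrix A, the belief array, and the likelihood array are hypothetical inputs, and the exact update analyzed in the paper may differ in details.

```python
import numpy as np

def social_learning_step(A, beliefs, likelihoods):
    """One step of a log-linear non-Bayesian social learning update (sketch).

    A           : (n, n) stochastic mixing matrix of the communication network
    beliefs     : (n, k) current beliefs of n agents over k hypotheses
    likelihoods : (n, k) likelihood of each agent's newest private observation
                  under each hypothesis (hypothetical inputs)
    """
    # Geometric averaging of neighbors' beliefs (consensus in log space)...
    log_consensus = A @ np.log(beliefs)
    # ...followed by a local Bayesian update using the new observation.
    unnormalized = np.exp(log_consensus) * likelihoods
    # Normalize each agent's belief to a probability vector over the hypotheses.
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)
```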

Original language: English (US)
Title of host publication: 2016 IEEE 55th Conference on Decision and Control, CDC 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6795-6801
Number of pages: 7
ISBN (Electronic): 9781509018376
DOIs
State: Published - Dec 27 2016
Event: 55th IEEE Conference on Decision and Control, CDC 2016 - Las Vegas, United States
Duration: Dec 12 2016 - Dec 14 2016

Publication series

Name: 2016 IEEE 55th Conference on Decision and Control, CDC 2016

Other

Other: 55th IEEE Conference on Decision and Control, CDC 2016
Country/Territory: United States
City: Las Vegas
Period: 12/12/16 - 12/14/16

ASJC Scopus subject areas

  • Artificial Intelligence
  • Decision Sciences (miscellaneous)
  • Control and Optimization
