Fast Convergence Rates for Distributed Non-Bayesian Learning

Angelia Nedich, Alex Olshevsky, Cesar A. Uribe

Research output: Contribution to journal › Article

23 Citations (Scopus)

Abstract

We consider the problem of distributed learning, where a network of agents collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a nonasymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
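
For a concrete sense of the kind of update rule studied in this line of work, the sketch below illustrates a standard consensus-based non-Bayesian learning step: each agent geometrically averages its neighbors' beliefs through a row-stochastic weight matrix and then reweights by the likelihood of its newest local observation. This is a minimal illustrative sketch of the generic scheme, not the paper's exact protocol or its improved variant for static networks; the function names, the weight matrix, and the toy data below are all hypothetical.

# Illustrative sketch (assumed generic update, not the paper's exact algorithm):
# geometric averaging of neighbors' beliefs followed by a local Bayesian
# reweighting with the likelihood of the newest observation.
import numpy as np

def update_beliefs(beliefs, A, likelihoods):
    """One synchronous belief-update step for all agents.

    beliefs     : (n_agents, n_hypotheses) array of current beliefs, rows sum to 1
    A           : (n_agents, n_agents) row-stochastic mixing matrix (network weights)
    likelihoods : (n_agents, n_hypotheses) array; entry (i, k) is the likelihood
                  agent i assigns to its new observation under hypothesis k
    returns     : updated beliefs, rows renormalized to sum to 1
    """
    # Geometric averaging of neighbors' beliefs, done as a consensus step in the log domain.
    log_mix = A @ np.log(beliefs)
    # Local Bayesian reweighting by the likelihood of the new observation.
    unnormalized = np.exp(log_mix) * likelihoods
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

# Toy run: 3 agents, 2 hypotheses, a fixed doubly stochastic weight matrix.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    beliefs = np.full((3, 2), 0.5)
    for _ in range(50):
        # In this toy example every agent's observations favor hypothesis 0.
        likelihoods = np.column_stack([rng.uniform(0.6, 1.0, 3),
                                       rng.uniform(0.0, 0.4, 3)])
        beliefs = update_beliefs(beliefs, A, likelihoods)
    print(beliefs)  # beliefs concentrate on hypothesis 0

In schemes of this form, the beliefs of all agents concentrate on the hypotheses that best explain the joint observations, and the paper quantifies this concentration with a nonasymptotic, geometric rate.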

Original language: English (US)
Article number: 7891016
Pages (from-to): 5538-5553
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
Volume: 62
Issue number: 11
DOI: 10.1109/TAC.2017.2690401
State: Published - Nov 1 2017

Fingerprint

  • Random processes
  • Parallel algorithms
  • Scalability
  • Network protocols

Keywords

  • Algorithm design and analysis
  • Bayes methods
  • distributed algorithms
  • estimation
  • learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

Nedich, A., Olshevsky, A., & Uribe, C. A. (2017). Fast Convergence Rates for Distributed Non-Bayesian Learning. IEEE Transactions on Automatic Control, 62(11), 5538-5553. https://doi.org/10.1109/TAC.2017.2690401
@article{cff0aa2ba5254581af5126e9ac9741fb,
title = "Fast Convergence Rates for Distributed Non-Bayesian Learning",
abstract = "We consider the problem of distributed learning, where a network of agents collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a nonasymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.",
keywords = "Algorithm design and analysis, Bayes methods, distributed algorithms, estimation, learning",
author = "Angelia Nedich and Alex Olshevsky and Uribe, {Cesar A.}",
year = "2017",
month = nov,
day = "1",
doi = "10.1109/TAC.2017.2690401",
language = "English (US)",
volume = "62",
pages = "5538--5553",
journal = "IEEE Transactions on Automatic Control",
issn = "0018-9286",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "11",
}
