Convergence of rule-of-thumb learning rules in social networks

Daron Acemoglu, Angelia Nedich, Asuman Ozdaglar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

28 Citations (Scopus)

Abstract

We study the problem of dynamic learning by a social network of agents. Each agent receives a signal about an underlying state and communicates with a subset of agents (his neighbors) in each period. The network is connected. In contrast to the majority of existing learning models, we focus on the case where the underlying state is time-varying. We consider the following class of rule of thumb learning rules: at each period, each agent constructs his posterior as a weighted average of his prior, his signal and the information he receives from neighbors. The weights given to signals can vary over time and the weights given to neighbors can vary across agents. We distinguish between two subclasses: (1) constant weight rules; (2) diminishing weight rules. The latter reduces weights given to signals asymptotically to 0. Our main results characterize the asymptotic behavior of beliefs. We show that the general class of rules leads to unbiased estimates of the underlying state. When the underlying state has innovations with variance tending to zero asymptotically, we show that the diminishing weight rules ensure convergence in the mean-square sense. In contrast, when the underlying state has persistent innovations, constant weight rules enable us to characterize explicit bounds on the mean square error between an agent's belief and the underlying state as a function of the type of learning rule and signal structure.
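For intuition, a single period of a constant-weight rule of the kind described in the abstract can be sketched as follows. The specific weights (`w_self`, `w_signal`), the Gaussian signal model, and the uniform averaging over neighbors are illustrative assumptions, not the paper's exact specification.

```python
import random

def rule_of_thumb_step(beliefs, neighbors, theta,
                       w_self=0.4, w_signal=0.2, noise_sd=1.0):
    """One period of a constant-weight rule-of-thumb update.

    Each agent's new belief is a weighted average of:
      - his prior (current belief),
      - a noisy private signal about the underlying state theta,
      - the average of his neighbors' current beliefs.
    The weights sum to one, so the estimate stays unbiased.
    """
    w_neigh = 1.0 - w_self - w_signal
    new_beliefs = []
    for i, prior in enumerate(beliefs):
        signal = theta + random.gauss(0.0, noise_sd)  # noisy observation of the state
        nbr_avg = sum(beliefs[j] for j in neighbors[i]) / len(neighbors[i])
        new_beliefs.append(w_self * prior + w_signal * signal + w_neigh * nbr_avg)
    return new_beliefs
```

Letting `w_signal` decay toward zero over periods would give a diminishing-weight rule of the second subclass; holding it constant, as here, corresponds to the constant-weight subclass, for which beliefs fluctuate around the state with bounded mean square error rather than converging exactly.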

Original language: English (US)
Title of host publication: Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008
Pages: 1714-1720
Number of pages: 7
DOIs: 10.1109/CDC.2008.4739167
State: Published - 2008
Externally published: Yes
Event: 47th IEEE Conference on Decision and Control, CDC 2008 - Cancun, Mexico
Duration: Dec 9 2008 - Dec 11 2008

Other

Other: 47th IEEE Conference on Decision and Control, CDC 2008
Country: Mexico
City: Cancun
Period: 12/9/08 - 12/11/08

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

Cite this

Acemoglu, D., Nedich, A., & Ozdaglar, A. (2008). Convergence of rule-of-thumb learning rules in social networks. In Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008 (pp. 1714-1720). [4739167] https://doi.org/10.1109/CDC.2008.4739167

@inproceedings{088da1b4e78a46ceb98a9e0e222cf8aa,
title = "Convergence of rule-of-thumb learning rules in social networks",
abstract = "We study the problem of dynamic learning by a social network of agents. Each agent receives a signal about an underlying state and communicates with a subset of agents (his neighbors) in each period. The network is connected. In contrast to the majority of existing learning models, we focus on the case where the underlying state is time-varying. We consider the following class of rule of thumb learning rules: at each period, each agent constructs his posterior as a weighted average of his prior, his signal and the information he receives from neighbors. The weights given to signals can vary over time and the weights given to neighbors can vary across agents. We distinguish between two subclasses: (1) constant weight rules; (2) diminishing weight rules. The latter reduces weights given to signals asymptotically to 0. Our main results characterize the asymptotic behavior of beliefs. We show that the general class of rules leads to unbiased estimates of the underlying state. When the underlying state has innovations with variance tending to zero asymptotically, we show that the diminishing weight rules ensure convergence in the mean-square sense. In contrast, when the underlying state has persistent innovations, constant weight rules enable us to characterize explicit bounds on the mean square error between an agent's belief and the underlying state as a function of the type of learning rule and signal structure.",
author = "Daron Acemoglu and Angelia Nedich and Asuman Ozdaglar",
year = "2008",
doi = "10.1109/CDC.2008.4739167",
language = "English (US)",
isbn = "9781424431243",
pages = "1714--1720",
booktitle = "Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008",

}
