### Abstract

We study the problem of dynamic learning by a social network of agents. Each agent receives a signal about an underlying state and communicates with a subset of agents (his neighbors) in each period. The network is connected. In contrast to the majority of existing learning models, we focus on the case where the underlying state is time-varying. We consider the following class of rule-of-thumb learning rules: at each period, each agent constructs his posterior as a weighted average of his prior, his signal, and the information he receives from his neighbors. The weights given to signals can vary over time, and the weights given to neighbors can vary across agents. We distinguish between two subclasses: (1) constant weight rules and (2) diminishing weight rules; the latter reduce the weights given to signals to zero asymptotically. Our main results characterize the asymptotic behavior of beliefs. We show that the general class of rules leads to unbiased estimates of the underlying state. When the innovations in the underlying state have variance tending to zero asymptotically, we show that diminishing weight rules ensure convergence in the mean-square sense. In contrast, when the underlying state has persistent innovations, constant weight rules allow us to derive explicit bounds on the mean-square error between an agent's belief and the underlying state as a function of the type of learning rule and the signal structure.
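The update rule described in the abstract can be sketched as follows. This is a minimal illustrative simulation, not the paper's construction: the ring network, the specific neighbor weights, the Gaussian signal model, the fixed state `theta` (the innovation-free special case), and the `1/t` schedule for the diminishing signal weight are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5        # number of agents
T = 2000     # number of periods
theta = 1.0  # underlying state (held fixed here: the innovation-free special case)

# Row-stochastic neighbor-weight matrix on a connected ring network.
# Weights given to neighbors may differ across agents; here they are uniform.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.25
    A[i, (i + 1) % n] = 0.25

x = rng.normal(size=n)  # initial beliefs (priors)

for t in range(1, T + 1):
    s = theta + rng.normal(scale=1.0, size=n)  # noisy private signals
    gamma = 1.0 / t  # diminishing weight rule: signal weight -> 0 asymptotically
    # Posterior = weighted average of (prior + neighbors' information) and the signal.
    x = (1 - gamma) * (A @ x) + gamma * s

print(np.abs(x - theta).max())  # beliefs concentrate near the underlying state
```

With a fixed state and `1/t` signal weights, the averaging over both time and neighbors drives every agent's belief toward `theta`, matching the mean-square convergence behavior the abstract attributes to diminishing weight rules in that regime.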

| | |
|---|---|
| Original language | English (US) |
| Title of host publication | Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008 |
| Pages | 1714-1720 |
| Number of pages | 7 |
| DOIs | https://doi.org/10.1109/CDC.2008.4739167 |
| State | Published - 2008 |
| Externally published | Yes |
| Event | 47th IEEE Conference on Decision and Control, CDC 2008 - Cancun, Mexico. Duration: Dec 9 2008 → Dec 11 2008 |

### Other

| | |
|---|---|
| Other | 47th IEEE Conference on Decision and Control, CDC 2008 |
| Country | Mexico |
| City | Cancun |
| Period | 12/9/08 → 12/11/08 |

### ASJC Scopus subject areas

- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization

### Cite this

Acemoglu, D., Nedich, A., & Ozdaglar, A. (2008). Convergence of rule-of-thumb learning rules in social networks. In *Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008* (pp. 1714-1720). [4739167] https://doi.org/10.1109/CDC.2008.4739167

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Convergence of rule-of-thumb learning rules in social networks

AU - Acemoglu, Daron

AU - Nedich, Angelia

AU - Ozdaglar, Asuman

PY - 2008

Y1 - 2008


UR - http://www.scopus.com/inward/record.url?scp=62949168371&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=62949168371&partnerID=8YFLogxK

U2 - 10.1109/CDC.2008.4739167

DO - 10.1109/CDC.2008.4739167

M3 - Conference contribution

SN - 9781424431243

SP - 1714

EP - 1720

BT - Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008

ER -