Online discrete optimization in social networks

Maxim Raginsky, Angelia Nedich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We discuss collective decision-making and learning capabilities of social networks in the presence of uncertainty. We present a discrete-time decision-making model for a network of agents in an uncertain environment wherein no agent has a model of the environment's evolution. The environment's impact on the agent network is captured through a sequence of cost functions, where the costs are revealed to the agents only after they commit to their decisions. The costs include individual agent costs and local-interaction costs incurred by each agent and its neighbors in the social network. In this model, each agent has a default mixed strategy that stays fixed regardless of the state of the environment, and the agent must expend effort when deviating from this strategy in order to alleviate the impact of the uncertain costs coming from the environment. We construct decentralized agent strategies whereby each agent selects its strategy based only on its related costs and the decisions of its neighbors in the network. In this setting, we quantify social learning in terms of regret, which is given by the difference between the realized network performance over a given time horizon and the best performance that could have been achieved in hindsight by a fictitious centralized entity with full knowledge of the environment's evolution.
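The regret benchmark described above can be illustrated in miniature. The sketch below is not the paper's algorithm; it is a minimal, hypothetical example of the *external-regret* bookkeeping the abstract refers to: costs are revealed only after each round's decision, and the realized cumulative cost is compared against the best fixed action chosen in hindsight by a fictitious entity that sees the whole cost sequence.

```python
import random

# Hedged sketch (not the paper's method): regret of an online decision-maker
# against the best fixed action in hindsight. The "placeholder decision rule"
# is an assumption made for illustration; the paper constructs decentralized
# strategies with provable regret bounds instead.

random.seed(0)
T = 100                     # time horizon
actions = [0, 1, 2]         # a small finite decision set

# Environment: one cost per action per round, unknown before the decision.
costs = [[random.random() for _ in actions] for _ in range(T)]

realized = 0.0
for t in range(T):
    a_t = random.choice(actions)   # placeholder decision rule
    realized += costs[t][a_t]      # cost revealed only after the choice

# Fictitious centralized benchmark: best single action over the full horizon.
best_fixed = min(sum(costs[t][a] for t in range(T)) for a in actions)
regret = realized - best_fixed
print(f"regret over {T} rounds: {regret:.3f}")
```

In the paper's networked setting the benchmark is richer (it accounts for individual and local-interaction costs across all agents), but the accounting has this same shape: realized cumulative cost minus the hindsight-optimal cost.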

Original language: English (US)
Title of host publication: 2014 American Control Conference, ACC 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3796-3801
Number of pages: 6
ISBN (Print): 9781479932726
DOIs: https://doi.org/10.1109/ACC.2014.6858819
State: Published - 2014
Externally published: Yes
Event: 2014 American Control Conference, ACC 2014 - Portland, OR, United States
Duration: Jun 4, 2014 - Jun 6, 2014

Keywords

  • Learning
  • Networked control systems
  • Optimization

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Raginsky, M., & Nedich, A. (2014). Online discrete optimization in social networks. In 2014 American Control Conference, ACC 2014 (pp. 3796-3801). [6858819] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ACC.2014.6858819
