TY - GEN
T1 - Online discrete optimization in social networks
AU - Raginsky, Maxim
AU - Nedić, Angelia
PY - 2014/1/1
Y1 - 2014/1/1
N2 - We discuss collective decision-making and learning capabilities of social networks in the presence of uncertainty. We present a discrete-time decision-making model for a network of agents in an uncertain environment wherein no agent has a model of the environment's evolution. The environment's impact on the agent network is captured through a sequence of cost functions, where the costs are revealed to the agents after the agents' decision time. The costs include individual agent costs and local-interaction costs incurred by each agent and its neighbors in the social network. In this model, each agent has a default mixed strategy that stays fixed regardless of the state of the environment, and the agent must expend effort when deviating from this strategy in order to alleviate the impact of the uncertain costs coming from the environment. We construct decentralized agent strategies whereby each agent selects its strategy based only on its related costs and the decisions of its neighbors in the network. In this setting, we quantify social learning in terms of regret, which is given by the difference between the realized network performance over a given time horizon and the best performance that could have been achieved in hindsight by a fictitious centralized entity with full knowledge of the environment's evolution.
KW - Learning
KW - Networked control systems
KW - Optimization
UR - http://www.scopus.com/inward/record.url?scp=84905717335&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84905717335&partnerID=8YFLogxK
U2 - 10.1109/ACC.2014.6858819
DO - 10.1109/ACC.2014.6858819
M3 - Conference contribution
AN - SCOPUS:84905717335
SN - 9781479932726
T3 - Proceedings of the American Control Conference
SP - 3796
EP - 3801
BT - 2014 American Control Conference, ACC 2014
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 American Control Conference, ACC 2014
Y2 - 4 June 2014 through 6 June 2014
ER -