Reinforcement learning meets minority game: Toward optimal resource allocation

Si Ping Zhang, Jia Qi Dong, Li Liu, Zi Gang Huang, Liang Huang, Ying-Cheng Lai

Research output: Contribution to journal › Article

2 Scopus citations

Abstract

The main point of this paper is to provide an affirmative answer to the question of whether herding can be eliminated, without any external control, in complex resource allocation systems, by exploiting reinforcement learning (RL) from artificial intelligence (AI). In particular, we demonstrate that when agents are empowered with RL (e.g., the popular Q-learning algorithm in AI), in that they gradually become familiar with the unknown game environment and attempt to deliver the optimal actions that maximize the payoff, herding can effectively be eliminated. Furthermore, computations reveal the striking phenomenon that, regardless of the initial state, the system evolves persistently and relentlessly toward the optimal state in which all resources are used efficiently. The evolution process is not without interruptions, however: large fluctuations occur, but only intermittently in time. The statistical distribution of the time between two successive fluctuating events is found to depend on the parity of the evolution, i.e., whether the number of time steps in between is odd or even. We develop a physical analysis and derive mean-field equations to gain an understanding of these phenomena. Since AI is becoming increasingly widespread, we expect our RL-empowered minority game system to have broad applications.
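To make the setup concrete, the following is a minimal sketch of Q-learning agents playing a two-resource minority game, in the spirit of the abstract. It is not the authors' implementation: the state encoding (the previous winning side), the payoff scheme (+1 for the minority, 0 otherwise), and all parameter values are illustrative assumptions.

```python
import random

def simulate(n_agents=101, n_steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Toy minority game with Q-learning agents (illustrative sketch).

    Each agent repeatedly chooses resource 0 or 1; agents on the minority
    side receive payoff 1, the majority receives 0. The Q-learning state
    is simply the previous winning side. Returns the attendance series
    (number of agents choosing resource 1 at each step).
    """
    rng = random.Random(seed)
    # Q[i][state][action]: agent i's value estimate for each (state, action)
    Q = [[[0.0, 0.0] for _ in range(2)] for _ in range(n_agents)]
    state = rng.randint(0, 1)
    attendance = []
    for _ in range(n_steps):
        actions = []
        for i in range(n_agents):
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.randint(0, 1)
            else:                            # greedy action from Q-table
                q = Q[i][state]
                a = 0 if q[0] >= q[1] else 1
            actions.append(a)
        n1 = sum(actions)
        minority = 1 if n1 < n_agents - n1 else 0   # fewer agents = winners
        attendance.append(n1)
        for i, a in enumerate(actions):
            r = 1.0 if a == minority else 0.0
            best_next = max(Q[i][minority])          # next state = new winner
            Q[i][state][a] += alpha * (r + gamma * best_next - Q[i][state][a])
        state = minority
    return attendance
```

With an odd number of agents there are no ties, and the attendance series can be inspected for the convergence toward balanced resource use (attendance near half the population) that the paper reports for RL-empowered agents.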

Original language: English (US)
Article number: 032302
Journal: Physical Review E
Volume: 99
Issue number: 3
DOIs
State: Published - Mar 6 2019

ASJC Scopus subject areas

  • Statistical and Nonlinear Physics
  • Statistics and Probability
  • Condensed Matter Physics
