Feature-based aggregation and deep reinforcement learning: A survey and some new implementations

Research output: Contribution to journal › Article › peer-review

83 Scopus citations

Abstract

In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller aggregate Markov decision problem, whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation, than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
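To make the aggregation idea concrete, below is a minimal sketch (not code from the paper) of approximate policy iteration with hard, feature-based aggregation on a small synthetic discounted MDP. The random problem data, the scalar feature, and the quantile-based grouping are illustrative assumptions; the sketch only mirrors the standard construction in which an aggregate problem is built from aggregation and disaggregation probabilities, its solution r is lifted back as the cost approximation Φr (piecewise constant, hence nonlinear in the feature), and a one-step lookahead minimization performs the policy improvement.

```python
# Minimal sketch (illustrative only): approximate policy iteration with hard,
# feature-based aggregation on a small synthetic discounted MDP.
# The problem data and the feature map are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

n, m, alpha = 60, 3, 0.95                     # states, actions, discount factor
P = rng.dirichlet(np.ones(n), size=(m, n))    # P[a, i, j] = p_ij(a)
g = rng.uniform(0.0, 1.0, size=(m, n))        # g[a, i]   = one-stage cost

# Feature map: a scalar feature per state, quantized into q aggregate states
# (hard aggregation: each state belongs to exactly one aggregate state).
q = 6
feature = rng.uniform(0.0, 1.0, size=n)                      # hypothetical feature
cuts = np.quantile(feature, np.linspace(0, 1, q + 1)[1:-1])  # q-1 interior cut points
group = np.digitize(feature, cuts)                           # state -> aggregate state

Phi = np.zeros((n, q))                         # aggregation probabilities (0/1 membership)
Phi[np.arange(n), group] = 1.0
D = Phi.T / Phi.sum(axis=0)[:, None]           # disaggregation: uniform within each group

def evaluate_policy(mu):
    """Solve the aggregate equation r = D g_mu + alpha * D P_mu Phi r and
    return the approximate cost J_tilde = Phi r (constant within each group)."""
    P_mu = P[mu, np.arange(n)]                 # n x n transition matrix under mu
    g_mu = g[mu, np.arange(n)]
    A = np.eye(q) - alpha * D @ P_mu @ Phi
    r = np.linalg.solve(A, D @ g_mu)
    return Phi @ r

def improve_policy(J_tilde):
    """One-step lookahead minimization using the aggregation-based cost."""
    Q = g + alpha * P @ J_tilde                # Q[a, i]
    return Q.argmin(axis=0)

mu = np.zeros(n, dtype=int)                    # start from an arbitrary policy
for k in range(20):
    J_tilde = evaluate_policy(mu)
    mu_new = improve_policy(J_tilde)
    if np.array_equal(mu_new, mu):
        print(f"policy stabilized after {k} iterations")
        break
    mu = mu_new

print("sample approximate costs:", J_tilde[:5])
```

Because each state is assigned to exactly one aggregate state, the resulting approximation is constant within each group of the feature space, which is the simplest instance of the nonlinear, feature-based cost approximations contrasted in the abstract with linear feature-based architectures.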

Original language: English (US)
Article number: 8476633
Pages (from-to): 1-31
Number of pages: 31
Journal: IEEE/CAA Journal of Automatica Sinica
Volume: 6
Issue number: 1
DOIs
State: Published - Jan 2019
Externally published: Yes

Keywords

  • Aggregation
  • Deep neural networks
  • Dynamic programming
  • Feature-based architectures
  • Markovian decision problems
  • Policy iteration
  • Reinforcement learning
  • Rollout algorithms

ASJC Scopus subject areas

  • Control and Optimization
  • Artificial Intelligence
  • Information Systems
  • Control and Systems Engineering
