Abstract
In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller aggregate Markov decision problem whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach, the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
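The core construction the abstract refers to — grouping original states by feature value into aggregate states, solving the smaller aggregate problem, and reading off a cost approximation that is constant within each feature group — can be sketched for the special case of hard aggregation and policy evaluation. This is an illustrative sketch, not the paper's exact formulation: the uniform disaggregation probabilities, the function name, and the tiny 4-state example are all assumptions made for the demonstration.

```python
import numpy as np

def aggregate_policy_evaluation(P, g, alpha, cluster):
    """Evaluate a fixed policy through a smaller aggregate problem
    (hard aggregation: each state belongs to exactly one aggregate state).

    P       : (n, n) transition matrix of the policy on the original states
    g       : (n,) one-stage cost vector
    alpha   : discount factor in (0, 1)
    cluster : (n,) integer feature labels mapping each state to an aggregate state
    """
    n = len(g)
    m = cluster.max() + 1
    # Aggregation matrix Phi: Phi[j, y] = 1 iff state j has feature label y
    Phi = np.zeros((n, m))
    Phi[np.arange(n), cluster] = 1.0
    # Disaggregation matrix D: uniform over the states in each cluster
    # (an assumption; other disaggregation probabilities are possible)
    D = (Phi / Phi.sum(axis=0)).T
    # Transition probabilities and costs of the aggregate problem
    P_hat = D @ P @ Phi          # (m, m), row-stochastic
    g_hat = D @ g                # (m,)
    # Solve the aggregate Bellman equation r = g_hat + alpha * P_hat @ r
    r = np.linalg.solve(np.eye(m) - alpha * P_hat, g_hat)
    # Cost approximation for the original states: piecewise constant
    # over the feature groups, hence a nonlinear function of the features
    return Phi @ r

# Tiny example: 4 original states, 2 aggregate (feature) states
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])
g = np.array([1.0, 1.0, 0.0, 0.0])
J_tilde = aggregate_policy_evaluation(P, g, alpha=0.9,
                                      cluster=np.array([0, 0, 1, 1]))
print(J_tilde)  # equal values within each cluster
```

The approximation `J_tilde` takes one value per aggregate state, so with well-chosen features it can capture cost-function shape that a linear-in-features architecture cannot; in the paper's scheme the features themselves may be produced by a deep neural network.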
| Original language | English (US) |
| --- | --- |
| Article number | 8476633 |
| Pages (from-to) | 1-31 |
| Number of pages | 31 |
| Journal | IEEE/CAA Journal of Automatica Sinica |
| Volume | 6 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2019 |
| Externally published | Yes |
Keywords
- Aggregation
- Deep neural networks
- Dynamic programming
- Feature-based architectures
- Markovian decision problems
- Policy iteration
- Reinforcement learning
- Rollout algorithms
ASJC Scopus subject areas
- Control and Optimization
- Artificial Intelligence
- Information Systems
- Control and Systems Engineering