Abstract
State abstraction is an important technique for scaling MDP algorithms. As is well known, however, it introduces difficulties due to the non-Markovian nature of state-abstracted models. Whereas prior approaches rely upon ad hoc fixes for this issue, we propose instead to view the state-abstracted model as a POMDP and show that we can thereby take advantage of state abstraction without sacrificing the Markov property. We further exploit the hierarchical structure introduced by state abstraction by extending the theory of options to a POMDP setting. In this context we propose a hierarchical Monte Carlo tree search algorithm and show that it converges to a recursively optimal hierarchical policy. Both theoretical and empirical results suggest that abstracting an MDP into a POMDP yields a scalable solution approach.
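To make the core idea concrete, here is a minimal sketch of viewing a state-abstracted MDP as a POMDP. This is not code from the paper: the `AbstractedPOMDP` class, the `phi` abstraction function, and the `reset()`/`step()` interface are all assumed for illustration. The point it shows is that the hidden ground state keeps evolving under the true Markov dynamics, while the agent observes only the (generally non-Markovian) abstract state φ(s) — i.e., the lossiness lives in the observation channel, not the dynamics.

```python
class AbstractedPOMDP:
    """View a state-abstracted MDP as a POMDP (sketch, hypothetical API).

    The ground state evolves under the true MDP dynamics, so the
    underlying process stays Markov; the agent merely observes the
    abstract state phi(s) instead of s, which is the POMDP reading
    of state abstraction described in the abstract.
    """

    def __init__(self, mdp, phi):
        self.mdp = mdp      # assumed ground MDP exposing reset() and step(state, action)
        self.phi = phi      # abstraction function: ground state -> abstract state
        self._state = None  # hidden ground state, never exposed to the agent

    def reset(self):
        self._state = self.mdp.reset()
        return self.phi(self._state)  # observation = abstract state only

    def step(self, action):
        # Transition on the hidden ground state; emit only the abstract observation.
        self._state, reward, done = self.mdp.step(self._state, action)
        return self.phi(self._state), reward, done
```

A planner (e.g., a Monte Carlo tree search over observation histories) interacting with this wrapper never sees the ground state, which is what lets state abstraction coexist with a well-defined Markov model.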
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3029-3037 |
| Number of pages | 9 |
| Journal | IJCAI International Joint Conference on Artificial Intelligence |
| Volume | 2016-January |
| State | Published - 2016 |
| Externally published | Yes |
| Event | 25th International Joint Conference on Artificial Intelligence, IJCAI 2016 - New York, United States |
| Event duration | Jul 9 2016 → Jul 15 2016 |
ASJC Scopus subject areas
- Artificial Intelligence