Markovian state and action abstractions for MDPs via hierarchical MCTS

Aijun Bai, Siddharth Srivastava, Stuart Russell

Research output: Contribution to journal › Conference article

8 Citations (Scopus)

Abstract

State abstraction is an important technique for scaling MDP algorithms. As is well known, however, it introduces difficulties due to the non-Markovian nature of state-abstracted models. Whereas prior approaches rely upon ad hoc fixes for this issue, we propose instead to view the state-abstracted model as a POMDP and show that we can thereby take advantage of state abstraction without sacrificing the Markov property. We further exploit the hierarchical structure introduced by state abstraction by extending the theory of options to a POMDP setting. In this context we propose a hierarchical Monte Carlo tree search algorithm and show that it converges to a recursively optimal hierarchical policy. Both theoretical and empirical results suggest that abstracting an MDP into a POMDP yields a scalable solution approach.
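
Since the abstract compresses the method into a few sentences, a small illustrative sketch may help. The Python below is a minimal reading of the core idea, not the paper's implementation: the ground state is hidden, the search tree branches only on histories of abstract observations phi(s) and actions, and a POMCP-style Monte Carlo tree search samples ground states from a particle belief at the root. The toy corridor domain and every name here (GroundMDP, phi, plan, ...) are illustrative assumptions, and the paper's hierarchical/options layer is omitted for brevity.

import math
import random
from collections import defaultdict

class GroundMDP:
    """Toy 1-D corridor: ground states 0..9, goal at 9, actions -1/+1."""
    actions = (-1, +1)

    def step(self, s, a):
        s2 = min(max(s + a, 0), 9)
        reward = 1.0 if s2 == 9 else 0.0
        return s2, reward, s2 == 9  # (next state, reward, terminal)

def phi(s):
    """State abstraction: collapse the 10 ground states into 5 blocks."""
    return s // 2

class Node:
    """Tree node for one history of abstract observations and actions."""
    def __init__(self):
        self.n = 0    # visit count of this history
        self.qa = {}  # action -> (visit count, running-mean return)

def ucb(parent_n, child_n, child_q, c=1.0):
    """Upper confidence bound; untried actions get infinite priority."""
    if child_n == 0:
        return float("inf")
    return child_q + c * math.sqrt(math.log(parent_n) / child_n)

def rollout(mdp, s, depth, gamma):
    """Random-policy rollout on the ground MDP to estimate a leaf value."""
    ret, disc = 0.0, 1.0
    for _ in range(depth):
        s, r, done = mdp.step(s, random.choice(mdp.actions))
        ret += disc * r
        disc *= gamma
        if done:
            break
    return ret

def simulate(mdp, tree, hist, s, depth, gamma):
    """One POMCP-style simulation; the tree keys on phi(s) histories, not s."""
    if depth == 0:
        return 0.0
    node = tree[hist]
    if node.n == 0:  # first visit: expand and evaluate with a rollout
        node.n = 1
        return rollout(mdp, s, depth, gamma)
    a = max(mdp.actions, key=lambda b: ucb(node.n, *node.qa.get(b, (0, 0.0))))
    s2, r, done = mdp.step(s, a)
    if done:
        q = r
    else:
        q = r + gamma * simulate(mdp, tree, hist + (a, phi(s2)),
                                 s2, depth - 1, gamma)
    n_a, q_a = node.qa.get(a, (0, 0.0))
    node.qa[a] = (n_a + 1, q_a + (q - q_a) / (n_a + 1))  # update running mean
    node.n += 1
    return q

def plan(mdp, belief, n_sims=3000, depth=20, gamma=0.95):
    """Search from a particle belief over ground states; history starts empty."""
    tree = defaultdict(Node)
    for _ in range(n_sims):
        s = random.choice(belief)  # sample a hidden ground state
        simulate(mdp, tree, (), s, depth, gamma)
    return max(tree[()].qa, key=lambda a: tree[()].qa[a][1])

if __name__ == "__main__":
    mdp = GroundMDP()
    belief = [0, 1]  # agent only knows it starts in abstract block phi = 0
    print("chosen action:", plan(mdp, belief))  # expected: +1 (move right)

Because each node conditions on the whole history of abstract observations rather than on a single abstract state, the search remains Markovian at the POMDP level; this is exactly the property the abstract argues is lost under naive state abstraction.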

Original language: English (US)
Pages (from-to): 3029-3037
Number of pages: 9
Journal: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2016-January
State: Published - Jan 1 2016
Externally published: Yes
Event: 25th International Joint Conference on Artificial Intelligence, IJCAI 2016 - New York, United States
Duration: Jul 9 2016 - Jul 15 2016

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Markovian state and action abstractions for MDPs via hierarchical MCTS. / Bai, Aijun; Srivastava, Siddharth; Russell, Stuart.

In: IJCAI International Joint Conference on Artificial Intelligence, Vol. 2016-January, 01.01.2016, p. 3029-3037.


@article{918c5fb8894247c28e87e246abbfb01b,
  title = "Markovian state and action abstractions for MDPs via hierarchical MCTS",
  author = "Aijun Bai and Siddharth Srivastava and Stuart Russell",
  journal = "IJCAI International Joint Conference on Artificial Intelligence",
  volume = "2016-January",
  pages = "3029--3037",
  year = "2016",
  month = jan,
  issn = "1045-0823",
  language = "English (US)",
}
