Approximate robust policy iteration for discounted infinite-horizon Markov decision processes with uncertain stationary parametric transition matrices

Baohua Li, Jennie Si

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider Markov decision processes with finite states, finite actions, and discounted infinite-horizon cost, in the deterministic policy space. The state transition matrices are uncertain but stationarily parameterized. This uncertainty reflects the realistic situation in which an accurate system model is unavailable for controller design, owing to limitations of estimation methods and to model deficiencies. Based on the quadratic total value function formulation, two approximate robust policy iteration algorithms are developed whose performance errors are guaranteed to lie within an arbitrarily small bound. The two approximations use iterative aggregation and a multilayer perceptron, respectively. It is proved that the robust policy iteration based on iterative aggregation converges surely to a stationary optimal or near-optimal policy, and that, under some conditions, the robust policy iteration based on the multilayer perceptron converges in probability to a stationary near-optimal policy. Furthermore, under some assumptions, the stationary solutions are guaranteed to be near-optimal in the deterministic policy space.
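For context, robust policy iteration for such problems alternates a worst-case policy evaluation with a policy improvement step against the robust Bellman equation J(i) = min_a max_{θ ∈ Θ} [c(i, a) + γ Σ_j p_θ(j | i, a) J(j)]. The sketch below is a minimal illustration of that generic scheme, not the paper's algorithm: it assumes the uncertainty set is a finite list of candidate transition models (a rectangular-set simplification), it evaluates policies by plain successive approximation rather than the paper's quadratic total value function formulation, and all names in it are illustrative.

```python
# Minimal sketch of robust policy iteration for a finite MDP with an
# uncertain transition model. Assumptions (not from the paper): the
# uncertainty set is a finite list of candidate transition tensors, and
# the evaluation step uses plain successive approximation. The paper
# instead approximates the evaluation via iterative aggregation or a
# multilayer perceptron.
import numpy as np

def robust_policy_iteration(P_set, c, gamma, tol=1e-8, max_eval_iters=10_000):
    """P_set: list of (n_actions, n_states, n_states) candidate transition
    tensors; c: (n_states, n_actions) stage costs; gamma: discount in (0, 1).
    Returns a deterministic policy and its worst-case discounted cost."""
    n_states, n_actions = c.shape
    policy = np.zeros(n_states, dtype=int)  # arbitrary initial policy
    while True:
        # Robust policy evaluation:
        #   J(i) = c(i, pi(i)) + gamma * max over candidate models of the
        #   expected next-state cost under pi (worst case taken per state).
        J = np.zeros(n_states)
        for _ in range(max_eval_iters):
            next_vals = np.max(
                [P[policy, np.arange(n_states), :] @ J for P in P_set], axis=0)
            J_new = c[np.arange(n_states), policy] + gamma * next_vals
            if np.max(np.abs(J_new - J)) < tol:
                J = J_new
                break
            J = J_new
        # Robust policy improvement: minimize the worst-case Q-value.
        Q_worst = np.max([c + gamma * (P @ J).T for P in P_set], axis=0)
        new_policy = np.argmin(Q_worst, axis=1)
        if np.array_equal(new_policy, policy):
            return policy, J
        policy = new_policy

if __name__ == "__main__":
    # Toy instance: 4 states, 2 actions, 3 candidate transition models.
    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2
    def random_kernel():
        P = rng.random((n_actions, n_states, n_states))
        return P / P.sum(axis=2, keepdims=True)  # rows sum to 1
    P_set = [random_kernel() for _ in range(3)]
    c = rng.random((n_states, n_actions))
    policy, J = robust_policy_iteration(P_set, c, gamma=0.9)
    print("robust policy:", policy)
    print("worst-case value:", J)
```

With exact robust evaluation over a rectangular set, the improvement step is monotone and the finite policy space guarantees termination of the outer loop; with the tolerance-based evaluation above, the same holds for tol small enough.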

Original language: English (US)
Title of host publication: IEEE International Conference on Neural Networks - Conference Proceedings
Pages: 2052-2057
Number of pages: 6
Article number: 4371274
ISBNs: 142441380X, 9781424413805
DOIs: https://doi.org/10.1109/IJCNN.2007.4371274
State: Published - 2007
Event: 2007 International Joint Conference on Neural Networks, IJCNN 2007 - Orlando, FL, United States
Duration: Aug 12, 2007 - Aug 17, 2007

Other

Other: 2007 International Joint Conference on Neural Networks, IJCNN 2007
Country: United States
City: Orlando, FL
Period: 8/12/07 - 8/17/07

Fingerprint

Multilayer neural networks
Agglomeration
Parameterization
Controllers
Costs
Uncertainty

ASJC Scopus subject areas

  • Software

Cite this

Li, B., & Si, J. (2007). Approximate robust policy iteration for discounted infinite-horizon Markov decision processes with uncertain stationary parametric transition matrices. In IEEE International Conference on Neural Networks - Conference Proceedings (pp. 2052-2057). Article 4371274. https://doi.org/10.1109/IJCNN.2007.4371274