Error bound analysis of policy iteration based approximate dynamic programming for deterministic discrete-time nonlinear systems

Wentao Guo, Feng Liu, Jennie Si, Shengwei Mei, Rui Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Extensive approximate dynamic programming (ADP) algorithms have been developed based on policy iteration. For policy iteration based ADP of deterministic discrete-time nonlinear systems, the existing literature has proved convergence in the undiscounted value function formulation under the assumption of exact approximation. The error bound of policy iteration based ADP has also been analyzed in a discounted value function formulation that accounts for approximation errors. However, there has been no error bound analysis of policy iteration based ADP in the undiscounted value function formulation with approximation errors taken into account. In this paper, we fill this theoretical gap. We provide a sufficient condition on the approximation error so that the iterative value function remains bounded in a neighborhood of the optimal value function. To the best of the authors' knowledge, this is the first error bound result for undiscounted policy iteration applied to deterministic discrete-time nonlinear systems with approximation errors.
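To make the setting concrete, the undiscounted policy iteration scheme the abstract refers to can be sketched in a drastically simplified, tabular form. This is an illustrative toy (a deterministic shortest-path problem with an absorbing zero-cost goal state, so the undiscounted cost sums are finite), not the paper's function-approximation analysis; all names and the problem setup below are this sketch's own assumptions.

```python
# Illustrative sketch only: tabular, undiscounted policy iteration on a tiny
# deterministic shortest-path problem. States 0..N-1 lie on a line; each
# action moves one step left or right at unit stage cost; state 0 is an
# absorbing, zero-cost goal, which keeps the undiscounted sums finite.

N = 5                   # states 0..N-1; state 0 is the goal
ACTIONS = (-1, +1)      # deterministic moves, clipped at the boundaries

def step(s, a):
    """Deterministic dynamics and stage cost: (next state, cost)."""
    if s == 0:
        return 0, 0.0   # absorbing goal: stay put, no cost
    return min(max(s + a, 0), N - 1), 1.0

def evaluate(policy, iters=200):
    """Policy evaluation: iterate the Bellman operator for the fixed policy."""
    V = [0.0] * N
    for _ in range(iters):
        V = [step(s, policy[s])[1] + V[step(s, policy[s])[0]] for s in range(N)]
    return V

def policy_iteration():
    """Alternate exact policy evaluation and greedy policy improvement."""
    policy = [ACTIONS[1]] * N           # start from an arbitrary policy
    while True:
        V = evaluate(policy)
        new_policy = [min(ACTIONS,
                          key=lambda a: step(s, a)[1] + V[step(s, a)[0]])
                      for s in range(N)]
        if new_policy == policy:        # greedy policy is stable: optimal
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
print(V)        # converges to the optimal cost-to-go V*(s) = s
```

In this exact, finite setting the iteration reaches the optimal value function in a couple of sweeps; the paper's contribution concerns the much harder case where each evaluation and improvement step carries an approximation error, and asks how large that error may be while the iterates stay in a neighborhood of the optimum.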

Original language: English (US)
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
Publisher: Institute of Electrical and Electronics Engineers Inc.
Volume: 2015-September
ISBN (Print): 9781479919604
DOI: 10.1109/IJCNN.2015.7280783
State: Published - Sep 28, 2015
Event: International Joint Conference on Neural Networks, IJCNN 2015 - Killarney, Ireland
Duration: Jul 12, 2015 to Jul 17, 2015

Other

Other: International Joint Conference on Neural Networks, IJCNN 2015
Country: Ireland
City: Killarney
Period: 7/12/15 to 7/17/15

Fingerprint

  • Dynamic programming
  • Nonlinear systems

Keywords

  • Approximation algorithms
  • Approximation methods
  • Mathematical model

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Guo, W., Liu, F., Si, J., Mei, S., & Li, R. (2015). Error bound analysis of policy iteration based approximate dynamic programming for deterministic discrete-time nonlinear systems. In Proceedings of the International Joint Conference on Neural Networks (Vol. 2015-September). [7280783] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN.2015.7280783

@inproceedings{a9e77edf22fd47639e12dc37ea6bdded,
title = "Error bound analysis of policy iteration based approximate dynamic programming for deterministic discrete-time nonlinear systems",
abstract = "Extensive approximate dynamic programming (ADP) algorithms have been developed based on policy iteration. For policy iteration based ADP of deterministic discrete-time nonlinear systems, existing literature has proved its convergence in the formulation of undiscounted value function under the assumption of exact approximation. Furthermore, the error bound of policy iteration based ADP has been analyzed in a discounted value function formulation with consideration of approximation errors. However, there has not been any error bound analysis of policy iteration based ADP in the undiscounted value function formulation with consideration of approximation errors. In this paper, we intend to fill this theoretical gap. We provide a sufficient condition on the approximation error, so that the iterative value function can be bounded in a neighbourhood of the optimal value function. To the best of the authors' knowledge, this is the first error bound result of the undiscounted policy iteration for deterministic discrete-time nonlinear systems considering approximation errors.",
keywords = "Approximation algorithms, Approximation methods, Mathematical model",
author = "Wentao Guo and Feng Liu and Jennie Si and Shengwei Mei and Rui Li",
year = "2015",
month = sep,
day = "28",
doi = "10.1109/IJCNN.2015.7280783",
language = "English (US)",
isbn = "9781479919604",
volume = "2015-September",
booktitle = "Proceedings of the International Joint Conference on Neural Networks",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - Error bound analysis of policy iteration based approximate dynamic programming for deterministic discrete-time nonlinear systems

AU - Guo, Wentao

AU - Liu, Feng

AU - Si, Jennie

AU - Mei, Shengwei

AU - Li, Rui

PY - 2015/9/28

Y1 - 2015/9/28

N2 - Extensive approximate dynamic programming (ADP) algorithms have been developed based on policy iteration. For policy iteration based ADP of deterministic discrete-time nonlinear systems, existing literature has proved its convergence in the formulation of undiscounted value function under the assumption of exact approximation. Furthermore, the error bound of policy iteration based ADP has been analyzed in a discounted value function formulation with consideration of approximation errors. However, there has not been any error bound analysis of policy iteration based ADP in the undiscounted value function formulation with consideration of approximation errors. In this paper, we intend to fill this theoretical gap. We provide a sufficient condition on the approximation error, so that the iterative value function can be bounded in a neighbourhood of the optimal value function. To the best of the authors' knowledge, this is the first error bound result of the undiscounted policy iteration for deterministic discrete-time nonlinear systems considering approximation errors.

AB - Extensive approximate dynamic programming (ADP) algorithms have been developed based on policy iteration. For policy iteration based ADP of deterministic discrete-time nonlinear systems, existing literature has proved its convergence in the formulation of undiscounted value function under the assumption of exact approximation. Furthermore, the error bound of policy iteration based ADP has been analyzed in a discounted value function formulation with consideration of approximation errors. However, there has not been any error bound analysis of policy iteration based ADP in the undiscounted value function formulation with consideration of approximation errors. In this paper, we intend to fill this theoretical gap. We provide a sufficient condition on the approximation error, so that the iterative value function can be bounded in a neighbourhood of the optimal value function. To the best of the authors' knowledge, this is the first error bound result of the undiscounted policy iteration for deterministic discrete-time nonlinear systems considering approximation errors.

KW - Approximation algorithms

KW - Approximation methods

KW - Mathematical model

UR - http://www.scopus.com/inward/record.url?scp=84951023469&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84951023469&partnerID=8YFLogxK

U2 - 10.1109/IJCNN.2015.7280783

DO - 10.1109/IJCNN.2015.7280783

M3 - Conference contribution

AN - SCOPUS:84951023469

SN - 9781479919604

VL - 2015-September

BT - Proceedings of the International Joint Conference on Neural Networks

PB - Institute of Electrical and Electronics Engineers Inc.

ER -