Abstract
Numerous approximate dynamic programming (ADP) algorithms have been developed based on policy iteration. For policy iteration based ADP of deterministic discrete-time nonlinear systems, the existing literature has proved convergence in the undiscounted value function formulation under the assumption of exact approximation. Furthermore, the error bound of policy iteration based ADP has been analyzed in the discounted value function formulation with approximation errors taken into account. However, no error bound analysis of policy iteration based ADP exists for the undiscounted value function formulation with approximation errors. In this paper, we intend to fill this theoretical gap. We provide a sufficient condition on the approximation error under which the iterative value function remains bounded within a neighborhood of the optimal value function. To the best of the authors' knowledge, this is the first error bound result for undiscounted policy iteration of deterministic discrete-time nonlinear systems that accounts for approximation errors.
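To make the setting concrete, the following is a minimal sketch of exact (error-free) policy iteration for the undiscounted case, which is the idealized procedure whose approximate counterpart the paper analyzes. The toy system here (a four-state deterministic chain with an absorbing zero-cost goal state) is a hypothetical example chosen so the undiscounted value function is finite; the paper itself treats general nonlinear systems with function approximators, and all names below are illustrative assumptions, not the authors' notation. Note that, as in the paper's setting, the iteration must start from an admissible (stabilizing) policy so that policy evaluation converges without a discount factor.

```python
# Sketch of undiscounted policy iteration on a toy deterministic system.
# States 0..3; state 0 is an absorbing goal with zero stage cost, so the
# undiscounted value function V(x) = sum of stage costs is finite.

def step(x, u):
    """Deterministic dynamics x' = f(x, u): u in {-1, +1}, clipped to [0, 3]."""
    if x == 0:              # goal state absorbs
        return 0
    return min(max(x + u, 0), 3)

def cost(x, u):
    """Stage cost: zero at the goal, one everywhere else."""
    return 0 if x == 0 else 1

def policy_evaluation(policy, n_states=4, tol=1e-9, max_iters=1000):
    """Solve V(x) = cost(x, u) + V(f(x, u)) for the fixed policy by sweeping."""
    V = [0.0] * n_states
    for _ in range(max_iters):
        delta = 0.0
        for x in range(n_states):
            u = policy[x]
            v_new = cost(x, u) + V[step(x, u)]
            delta = max(delta, abs(v_new - V[x]))
            V[x] = v_new
        if delta < tol:
            break
    return V

def policy_improvement(V, n_states=4, actions=(-1, 1)):
    """Greedy policy with respect to the current value function."""
    return [min(actions, key=lambda u: cost(x, u) + V[step(x, u)])
            for x in range(n_states)]

def policy_iteration(n_states=4, max_rounds=10):
    # Initial admissible policy: always move toward the goal, so every
    # trajectory reaches state 0 and the undiscounted cost is finite.
    policy = [-1] * n_states
    V = policy_evaluation(policy, n_states)
    for _ in range(max_rounds):
        new_policy = policy_improvement(V, n_states)
        if new_policy == policy:
            break
        policy = new_policy
        V = policy_evaluation(policy, n_states)
    return policy, V
```

In the exact setting above, each iterate satisfies the Bellman equation exactly; the paper's contribution is to bound how far the iterates can drift from the optimal value function when `policy_evaluation` is only carried out approximately, e.g. by a neural network with a bounded approximation error.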
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Volume | 2015-September |
ISBN (Print) | 9781479919604 |
DOIs | |
State | Published - Sep 28 2015 |
Event | International Joint Conference on Neural Networks, IJCNN 2015 - Killarney, Ireland Duration: Jul 12 2015 → Jul 17 2015 |
Other
Other | International Joint Conference on Neural Networks, IJCNN 2015 |
---|---|
Country/Territory | Ireland |
City | Killarney |
Period | 7/12/15 → 7/17/15 |
Keywords
- Approximation algorithms
- Approximation methods
- Mathematical model
ASJC Scopus subject areas
- Software
- Artificial Intelligence