Policy approximation in policy iteration approximate dynamic programming for discrete-time nonlinear systems

Wentao Guo, Jennie Si, Feng Liu, Shengwei Mei

Research output: Contribution to journal › Article › peer-review

32 Scopus citations

Abstract

Policy iteration approximate dynamic programming (DP) is an important algorithm for solving optimal decision and control problems. In this paper, we focus on the problem associated with policy approximation in policy iteration approximate DP for discrete-time nonlinear systems using infinite-horizon undiscounted value functions. Taking policy approximation error into account, we demonstrate asymptotic stability of the control policy under our problem setting, show boundedness of the value function during each policy iteration step, and introduce a new sufficient condition for the value function to converge to a bounded neighborhood of the optimal value function. Aiming for practical implementation of an approximate policy, we consider using Volterra series, which has been extensively covered in controls literature for its good theoretical properties and for its success in practical applications. We illustrate the effectiveness of the main ideas developed in this paper using several examples including a practical problem of excitation control of a hydrogenerator.
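As a rough illustration of the ideas summarized above, the sketch below runs generic policy iteration with a fitted (approximate) policy on a toy scalar discrete-time nonlinear system: policy evaluation by a truncated undiscounted rollout, one-step greedy improvement over a candidate action grid, and a least-squares fit of a truncated second-order polynomial policy standing in for a Volterra-series parameterization. The dynamics, stage cost, and all names are assumptions for illustration only, not the system or implementation from the paper.

```python
# Minimal policy-iteration ADP sketch on a scalar discrete-time nonlinear system.
# The dynamics f, the stage cost, and the truncated second-order (Volterra-like)
# policy parameterization are illustrative assumptions, not the authors' method.
import numpy as np

def f(x, u):
    # Example nonlinear dynamics (assumed for illustration).
    return 0.8 * np.sin(x) + u

def stage_cost(x, u):
    # Undiscounted quadratic stage cost.
    return x**2 + u**2

def policy_value(policy, x0, horizon=200):
    # Approximate the infinite-horizon undiscounted value by a long rollout;
    # meaningful only if the policy is stabilizing so the tail of the sum vanishes.
    x, v = x0, 0.0
    for _ in range(horizon):
        u = policy(x)
        v += stage_cost(x, u)
        x = f(x, u)
    return v

def improve(policy, states, u_grid):
    # One-step greedy improvement: at each sampled state, pick the action that
    # minimizes stage cost plus the current policy's value from the next state.
    targets = []
    for x in states:
        q = [stage_cost(x, u) + policy_value(policy, f(x, u)) for u in u_grid]
        targets.append(u_grid[int(np.argmin(q))])
    return np.array(targets)

def fit_quadratic_policy(states, targets):
    # Fit u = c0 + c1*x + c2*x^2 by least squares: a memoryless, truncated
    # polynomial (Volterra-like) approximation of the improved policy, which
    # introduces the policy approximation error studied in the paper.
    Phi = np.stack([np.ones_like(states), states, states**2], axis=1)
    c, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return lambda x: c[0] + c[1] * x + c[2] * x**2

# Policy iteration with an approximated policy at every step.
states = np.linspace(-1.0, 1.0, 21)
u_grid = np.linspace(-2.0, 2.0, 41)
policy = lambda x: -0.5 * x          # initial admissible (stabilizing) policy
for it in range(5):
    targets = improve(policy, states, u_grid)
    policy = fit_quadratic_policy(states, targets)
    print(f"iteration {it}: value at x0=1 is {policy_value(policy, 1.0):.3f}")
```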

Original language: English (US)
Pages (from-to): 2794-2807
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 29
Issue number: 7
DOIs
State: Published - Jul 2018

Keywords

  • Approximate dynamic programming (DP)
  • Volterra series
  • convergence
  • error bound
  • policy approximation
  • policy iteration

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence
