Convergence of Discretization Procedures in Dynamic Programming

Research output: Contribution to journal › Article › peer-review

75 Scopus citations

Abstract

The computational solution of discrete-time stochastic optimal control problems by dynamic programming requires, in most cases, discretization of the state and control spaces whenever these spaces are infinite. In this short paper we consider a discretization procedure often employed in practice. Under certain compactness and Lipschitz continuity assumptions we show that the solution of the discretized algorithm converges to the solution of the continuous algorithm, as the discretization grids become finer and finer. Furthermore, any control law obtained from the discretized algorithm results in a value of the cost functional which converges to the optimal value of the problem.
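The following is a minimal, hypothetical sketch (not the paper's formulation or notation) of the kind of discretization the abstract describes: a finite-horizon stochastic DP recursion solved on uniform grids over compact state and control sets, with the value function looked up at the nearest grid point. The dynamics `f`, the `stage_cost`, and the disturbance distribution are illustrative assumptions chosen to satisfy the compactness and Lipschitz conditions; running it with progressively finer grids illustrates the convergence behavior the paper establishes.

```python
import numpy as np

# Illustrative sketch only: scalar system x_{k+1} = f(x_k, u_k, w_k) on the
# compact state space [-1, 1], solved by backward DP on finite grids.

def f(x, u, w):
    """Assumed Lipschitz dynamics, clipped to keep the state in [-1, 1]."""
    return np.clip(0.8 * x + 0.5 * u + w, -1.0, 1.0)

def stage_cost(x, u):
    """Assumed stage cost, Lipschitz on the compact state/control sets."""
    return x ** 2 + 0.1 * u ** 2

def discretized_dp(n_x, n_u, horizon=10,
                   noise=(-0.1, 0.0, 0.1), probs=(0.25, 0.5, 0.25)):
    """Backward DP on an n_x-point state grid and n_u-point control grid.

    The expectation over the disturbance is a finite sum; successor states
    are mapped to the nearest grid point, which is the discretization step
    whose convergence the paper analyzes (here only illustrated).
    """
    xs = np.linspace(-1.0, 1.0, n_x)
    us = np.linspace(-1.0, 1.0, n_u)
    V = np.zeros(n_x)                          # terminal cost taken as zero
    for _ in range(horizon):
        V_new = np.empty(n_x)
        for i, x in enumerate(xs):
            best = np.inf
            for u in us:
                exp_cost = stage_cost(x, u)
                for w, p in zip(noise, probs):
                    x_next = f(x, u, w)
                    j = np.argmin(np.abs(xs - x_next))  # nearest grid point
                    exp_cost += p * V[j]
                best = min(best, exp_cost)
            V_new[i] = best
        V = V_new
    return xs, V

if __name__ == "__main__":
    # As the grids become finer, the discretized value at x = 0 settles down,
    # illustrating (not proving) the convergence result of the paper.
    for n in (5, 11, 21, 41, 81):
        xs, V = discretized_dp(n_x=n, n_u=n)
        print(f"grid size {n:3d}: V_0(0) ~ {V[np.argmin(np.abs(xs))]:.4f}")
```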

Original language: English (US)
Pages (from-to): 415-419
Number of pages: 5
Journal: IEEE Transactions on Automatic Control
Volume: 20
Issue number: 3
State: Published - Jun 1975
Externally published: Yes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

