A new value iteration method for the average cost dynamic programming problem

Research output: Contribution to journal › Article › peer-review


Abstract

We propose a new value iteration method for the classical average cost Markovian decision problem, under the assumption that all stationary policies are unichain and that, furthermore, there exists a state that is recurrent under all stationary policies. This method is motivated by a relation between the average cost problem and an associated stochastic shortest path problem. Contrary to the standard relative value iteration, our method involves a weighted sup-norm contraction, and for this reason it admits a Gauss-Seidel implementation. Computational tests indicate that the Gauss-Seidel version of the new method substantially outperforms the standard method for difficult problems.
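To make the contrast concrete, the following is a minimal numerical sketch, not the paper's exact algorithm: standard relative value iteration side by side with a Gauss-Seidel iteration motivated by the stochastic shortest path connection, in which transitions into a fixed recurrent state `t` are removed (making the mapping a contraction) and the average-cost estimate is adjusted with a step size. The toy MDP data, the step size `gamma`, and the iteration counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy 3-state, 2-action unichain MDP (hypothetical example data).
# P[a][i, j] = transition probability under action a; g[a][i] = one-stage cost.
P = [
    np.array([[0.5, 0.5, 0.0],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]]),
    np.array([[0.2, 0.4, 0.4],
              [0.6, 0.2, 0.2],
              [0.1, 0.1, 0.8]]),
]
g = [np.array([2.0, 1.0, 3.0]), np.array([1.5, 2.5, 0.5])]


def relative_value_iteration(P, g, ref=0, iters=2000):
    """Standard relative VI: subtract the value at a reference state each
    sweep to keep the iterates bounded.  At convergence, (Th)[ref] is the
    optimal average cost and h solves Bellman's equation with h[ref] = 0."""
    n = P[0].shape[0]
    h = np.zeros(n)
    for _ in range(iters):
        Th = np.min([g[a] + P[a] @ h for a in range(len(P))], axis=0)
        h = Th - Th[ref]  # renormalize at the reference state
    Th = np.min([g[a] + P[a] @ h for a in range(len(P))], axis=0)
    return Th[ref], h


def ssp_gauss_seidel(P, g, t=0, gamma=0.1, iters=3000):
    """Gauss-Seidel sketch of the SSP-based idea: drop transitions into the
    recurrent state t (so each sweep is a contraction), update states one at
    a time using the latest values, and adjust the average-cost estimate lam
    by a step proportional to h[t].  At a fixed point, h[t] = 0 and lam is
    the optimal average cost."""
    n = P[0].shape[0]
    h = np.zeros(n)
    lam = 0.0
    for _ in range(iters):
        for i in range(n):  # in-place (Gauss-Seidel) sweep
            h[i] = min(g[a][i] - lam + P[a][i] @ h - P[a][i, t] * h[t]
                       for a in range(len(P)))
        lam += gamma * h[t]
    return lam, h
```

On this toy problem both routines converge to the same average cost and the same differential cost vector (normalized to vanish at the reference state); the step size `gamma` must be small enough relative to the mean recurrence time of state `t` for the outer update to be stable.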

Original language: English (US)
Pages (from-to): 742-759
Number of pages: 18
Journal: SIAM Journal on Control and Optimization
Volume: 36
Issue number: 2
DOIs
State: Published - 1998
Externally published: Yes

Keywords

  • Average cost
  • Dynamic programming
  • Value iteration

ASJC Scopus subject areas

  • Control and Optimization
  • Applied Mathematics

