Regular policies in abstract dynamic programming

Research output: Contribution to journal › Article › peer-review


Abstract

We consider challenging dynamic programming models where the associated Bellman equation, and the value and policy iteration algorithms, commonly exhibit complex and even pathological behavior. Our analysis is based on the new notion of regular policies. These are policies that are well-behaved with respect to value and policy iteration, and are patterned after proper policies, which are central in the theory of stochastic shortest path problems. We show that the optimal cost function over regular policies may have favorable value and policy iteration properties, which the optimal cost function over all policies need not have. We accordingly develop a unifying methodology to address long-standing analytical and algorithmic issues in broad classes of undiscounted models, including stochastic and minimax shortest path problems, as well as positive cost, negative cost, risk-sensitive, and multiplicative cost problems.
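To make the setting concrete, here is a minimal sketch of value iteration on a tiny stochastic shortest path problem, i.e., repeated application of the Bellman operator J(i) = min over u of [g(i, u) + Σ_j p(i, j | u) J(j)] with a cost-free terminal state. The problem data below is hypothetical and purely illustrative; it is not taken from the paper and does not exhibit the pathologies the paper studies.

```python
# Illustrative value iteration for a small stochastic shortest path problem.
# States 0 and 1 are non-terminal; state 2 is the cost-free terminal state.
# transitions[state][action] = (stage_cost, {next_state: probability})
transitions = {
    0: {"a": (1.0, {1: 1.0}),           # move to state 1 at cost 1
        "b": (4.0, {2: 1.0})},          # move straight to the terminal at cost 4
    1: {"a": (1.0, {2: 0.9, 0: 0.1})},  # usually terminates, sometimes loops back
}
TERMINAL = 2

def value_iteration(transitions, terminal, iters=200):
    """Synchronously apply J(i) <- min_u [g(i,u) + sum_j p(i,j|u) J(j)]."""
    J = {s: 0.0 for s in transitions}
    J[terminal] = 0.0  # the terminal state is absorbing and cost-free
    for _ in range(iters):
        J_new = dict(J)
        for s, actions in transitions.items():
            J_new[s] = min(
                g + sum(p * J[j] for j, p in probs.items())
                for g, probs in actions.values()
            )
        J = J_new  # synchronous update: all states use the previous iterate
    return J

J = value_iteration(transitions, TERMINAL)
# Converges to the fixed point J(0) = 2/0.9 ≈ 2.222, J(1) ≈ 1.222
```

Here every policy reaches the terminal state with probability 1 (it is "proper"), so value iteration converges to the unique fixed point of the Bellman equation; the paper's notion of regular policies generalizes this well-behaved case to abstract models where such convergence can otherwise fail.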

Original language: English (US)
Pages (from-to): 1694-1727
Number of pages: 34
Journal: SIAM Journal on Optimization
Volume: 27
Issue number: 3
DOIs
State: Published - 2017
Externally published: Yes

Keywords

  • Abstract dynamic programming
  • Discrete-time optimal control
  • Policy iteration
  • Shortest path
  • Value iteration

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
