Proper Policies in Infinite-State Stochastic Shortest Path Problems

Research output: Contribution to journal › Article › peer-review


Abstract

We consider stochastic shortest path problems with infinite state and control spaces, a nonnegative cost per stage, and a termination state. We extend the notion of a proper policy, a policy that terminates within a finite expected number of steps, from the context of finite state spaces to the context of infinite state spaces. We consider the optimal cost function J∗, and the optimal cost function Ĵ over just the proper policies. We show that J∗ and Ĵ are the smallest and largest solutions of Bellman's equation, respectively, within a suitable class of Lyapunov-like functions. If the cost per stage is bounded, these functions are those that are bounded over the effective domain of Ĵ. The standard value iteration algorithm may be attracted to either J∗ or Ĵ, depending on the initial condition.
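The gap between J∗ and Ĵ, and the dependence of value iteration on its starting point, can be seen in a minimal sketch below. This is a standard one-state textbook construction (not an example taken from the paper itself): the improper policy that cycles forever at zero cost makes J∗ = 0, while every proper policy must pay the termination cost, so Ĵ = 1.

```python
# Toy SSP: state 1 plus a cost-free absorbing termination state t.
# Controls at state 1 (hypothetical example, standard in the SSP literature):
#   "stay": cost 0, remain at state 1  (improper: never terminates)
#   "go":   cost 1, move to t          (proper: terminates in one step)

def bellman(J):
    """One value-iteration step at state 1: (TJ)(1) = min over the two controls."""
    return min(0.0 + J,    # stay: incur 0, continue with value J
               1.0 + 0.0)  # go: incur 1, then terminate (value 0 at t)

# Starting below both solutions, value iteration is attracted to
# J* = 0, the smallest solution of Bellman's equation.
J_low = 0.0
for _ in range(50):
    J_low = bellman(J_low)

# Starting above, it is attracted to J^ = 1, the optimal cost over
# proper (terminating) policies and the largest solution.
J_high = 5.0
for _ in range(50):
    J_high = bellman(J_high)

print(J_low, J_high)  # 0.0 1.0
```

Note that every value in [0, 1] is also a fixed point of this toy Bellman operator, which is why characterizing J∗ and Ĵ as the extreme solutions within a restricted function class matters.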

Original language: English (US)
Article number: 8309409
Pages (from-to): 3787-3792
Number of pages: 6
Journal: IEEE Transactions on Automatic Control
Volume: 63
Issue number: 11
DOIs
State: Published - Nov 2018
Externally published: Yes

Keywords

  • Dynamic programming
  • Markov decision processes
  • stochastic optimal control
  • stochastic shortest paths (SSPs)

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
