## Abstract

In this paper, we consider a broad class of infinite horizon discrete-time optimal control models that involve a nonnegative cost function and an affine mapping in their dynamic programming equation. They include as special cases several classical models, such as stochastic undiscounted nonnegative cost problems, stochastic multiplicative cost problems, and risk-sensitive problems with exponential cost. We focus on the case where the state space is finite and the control space has some compactness properties, and we emphasize shortest path-type models. We assume that the affine mapping has a semicontractive character, whereby for some policies it is a contraction, whereas for others it is not. In one line of analysis, we impose assumptions guaranteeing that the noncontractive policies cannot be optimal. Under these assumptions, we prove strong results that resemble those for discounted Markovian decision problems, such as the uniqueness of solution of Bellman's equation, and the validity of forms of value and policy iteration. In the absence of these assumptions, the results are weaker and unusual in character: the optimal cost function need not be a solution of Bellman's equation, and may not be found by value or policy iteration. Instead the optimal cost function over just the contractive policies is the largest solution of Bellman's equation, and can be computed by a variety of algorithms.
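As a concrete illustration of the Bellman equation and value iteration discussed in the abstract, the following is a minimal sketch on a hypothetical 3-state stochastic shortest path instance (nonnegative stage costs, a cost-free terminal state). The states, controls, costs, and transition probabilities here are illustrative assumptions, not data from the paper; the paper's affine monotonic framework covers this as the classical undiscounted nonnegative-cost special case.

```python
# Hypothetical SSP example (illustrative only, not from the paper):
# states 0 and 1 are non-terminal; state 2 is the cost-free terminal state.
# g[i][u] is the stage cost of control u at state i,
# p[i][u][j] is the probability of moving from i to j under u.
g = {
    0: {"a": 1.0, "b": 2.0},
    1: {"a": 1.0},
}
p = {
    0: {"a": [0.0, 0.5, 0.5], "b": [0.0, 0.0, 1.0]},
    1: {"a": [0.0, 0.0, 1.0]},
}

def bellman(J):
    """One application of the Bellman operator:
    (TJ)(i) = min_u [ g(i,u) + sum_j p_ij(u) J(j) ],
    with J fixed at 0 on the terminal state."""
    return {
        i: min(
            g[i][u] + sum(p[i][u][j] * J.get(j, 0.0) for j in range(3))
            for u in g[i]
        )
        for i in g
    }

def value_iteration(J0, tol=1e-10, max_iter=10_000):
    """Iterate J_{k+1} = T J_k until the sup-norm change is below tol."""
    J = dict(J0)
    for _ in range(max_iter):
        J_next = bellman(J)
        if max(abs(J_next[i] - J[i]) for i in J) < tol:
            return J_next
        J = J_next
    return J

J_star = value_iteration({0: 0.0, 1: 0.0})
# Converges to J*(0) = 1.5, J*(1) = 1.0, the unique fixed point here.
```

In this toy instance every policy reaches the terminal state with probability 1, so all policies are contractive and value iteration converges to the unique solution of Bellman's equation; the paper's semicontractive analysis addresses the harder case where some policies are noncontractive.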

| Field | Value |
| --- | --- |
| Original language | English (US) |
| Article number | 8629039 |
| Pages (from-to) | 3117-3128 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 64 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2019 |
| Externally published | Yes |

## Keywords

- Dynamic programming (DP)
- Markov decision processes
- risk sensitive control
- stochastic shortest paths

## ASJC Scopus subject areas

- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering