## Abstract

We propose a new value iteration method for the classical average cost Markovian decision problem, under the assumption that all stationary policies are unichain and that there exists a state which is recurrent under every stationary policy. The method is motivated by a relation between the average cost problem and an associated stochastic shortest path problem. In contrast to the standard relative value iteration, our method involves a weighted sup-norm contraction, and for this reason it admits Gauss-Seidel and asynchronous implementations. Computational tests indicate that the Gauss-Seidel version of the new method substantially outperforms the standard method on difficult problems. The contraction property also makes the method a suitable basis for the development of asynchronous Q-learning methods.
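For reference, the baseline the paper compares against is standard relative value iteration for the average cost problem. The sketch below is a minimal, illustrative Python implementation of that baseline, with a Gauss-Seidel (in-place sweep) variant; it is *not* the paper's new SSP-based method, whose contraction property is precisely what guarantees that such Gauss-Seidel and asynchronous sweeps converge. All names (`P`, `g`, `ref_state`, and the toy problem data) are assumptions chosen for the example.

```python
import numpy as np

def relative_value_iteration(P, g, ref_state=0, tol=1e-8, max_iter=10_000,
                             gauss_seidel=False):
    """Standard relative value iteration for an average-cost MDP (sketch).

    P[a][i, j] : probability of moving from state i to state j under action a.
    g[a][i]    : one-stage cost of taking action a in state i.
    Returns an estimate of the optimal average cost and a differential
    cost vector normalized to zero at ref_state.
    """
    n_actions = len(P)
    n_states = P[0].shape[0]
    h = np.zeros(n_states)  # differential (relative) cost vector
    for _ in range(max_iter):
        h_old = h.copy()
        if gauss_seidel:
            # Gauss-Seidel sweep: each state update immediately uses the
            # freshest values of the states updated earlier in the sweep.
            for i in range(n_states):
                h[i] = min(g[a][i] + P[a][i] @ h for a in range(n_actions))
            h -= h[ref_state]  # renormalize at the reference state
        else:
            # Jacobi sweep: every state is updated from the old vector.
            # (Convergence of this form typically needs an aperiodicity
            # condition, e.g. via a damping/aperiodicity transformation.)
            Th = np.array([
                min(g[a][i] + P[a][i] @ h_old for a in range(n_actions))
                for i in range(n_states)
            ])
            h = Th - Th[ref_state]
        if np.max(np.abs(h - h_old)) < tol:
            break
    # With h(ref_state) = 0, the Bellman value at the reference state
    # approximates the optimal average cost.
    lam = min(g[a][ref_state] + P[a][ref_state] @ h
              for a in range(n_actions))
    return lam, h

# Tiny illustrative 2-state, 2-action example (made-up data).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.7, 0.3]])]
g = [np.array([2.0, 1.0]), np.array([0.5, 3.0])]
print(relative_value_iteration(P, g, gauss_seidel=True))
```

The Gauss-Seidel sweep shown here is heuristic for the standard iteration, since the underlying mapping is not a sup-norm contraction; the point of the paper is that its SSP-motivated iteration *is* a weighted sup-norm contraction, which is what justifies Gauss-Seidel and asynchronous implementations.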

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2692-2697 |
| Number of pages | 6 |
| Journal | Proceedings of the IEEE Conference on Decision and Control |
| Volume | 3 |
| State | Published - 1998 |
| Externally published | Yes |
| Event | 1998 37th IEEE Conference on Decision and Control (CDC), Tampa, FL, USA, Dec 16-18, 1998 |

## ASJC Scopus subject areas

- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization