Policy Synthesis for Switched Linear Systems with Markov Decision Process Switching

Bo Wu, Murat Cubuktepe, Franck Djeumou, Zhe Xu, Ufuk Topcu

Research output: Contribution to journal › Article › peer-review

Abstract

We study the synthesis of mode switching protocols for a class of discrete-time switched linear systems in which the mode jumps are governed by Markov decision processes (MDPs). We call such systems MDP-JLS for brevity. Each state of the MDP corresponds to a mode in the switched system. The probabilistic state transitions in the MDP represent the mode transitions. We focus on finding a policy that selects the switching actions at each mode such that the switched system is guaranteed to be stable. Given a policy in the MDP, the considered MDP-JLS reduces to a Markov jump linear system (MJLS). We consider both mean-square stability and stability with probability one. For mean-square stability, we leverage existing stability conditions for MJLSs and propose efficient semidefinite programming formulations to find a stabilizing policy in the MDP. For stability with probability one, we derive new sufficient conditions and compute a stabilizing policy using linear programming. We also extend the policy synthesis results to MDP-JLS with uncertain mode transition probabilities.
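As a rough illustration of the kind of condition the abstract refers to, the sketch below is a minimal, hypothetical example (not the paper's synthesis algorithm): once a switching policy is fixed, the MDP-JLS reduces to an MJLS x_{k+1} = A_{θ_k} x_k with a known mode transition matrix P = [p_ij], and mean-square stability can be certified with the standard coupled-Lyapunov LMI feasibility test X_i > 0, A_i' (Σ_j p_ij X_j) A_i - X_i < 0 for every mode i. The matrices A, Pmat, and the tolerance eps are illustrative placeholders.

# Minimal sketch: coupled-Lyapunov LMI feasibility check for mean-square
# stability of an MJLS under a fixed switching policy (illustrative data).
import numpy as np
import cvxpy as cp

def is_mean_square_stable(A_list, Pmat, eps=1e-6):
    n = A_list[0].shape[0]
    m = len(A_list)
    X = [cp.Variable((n, n), symmetric=True) for _ in range(m)]
    constraints = []
    for i, Ai in enumerate(A_list):
        # Expected "next-step" certificate under the transition matrix P.
        coupled = sum(Pmat[i, j] * X[j] for j in range(m))
        constraints.append(X[i] >> eps * np.eye(n))
        constraints.append(Ai.T @ coupled @ Ai - X[i] << -eps * np.eye(n))
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Hypothetical example: mode 0 is stable, mode 1 is unstable on its own,
# but the chain returns to the stable mode often enough.
A = [np.array([[0.5, 0.2], [0.0, 0.4]]),
     np.array([[1.1, 0.0], [0.1, 0.3]])]
Pmat = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
print(is_mean_square_stable(A, Pmat))

The paper itself goes further than this feasibility check: instead of verifying stability for a fixed policy, it searches over policies, which the abstract states is formulated as a semidefinite program for mean-square stability and as a linear program for stability with probability one.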

Original language: English (US)
Journal: IEEE Transactions on Automatic Control
State: Accepted/In press - 2022
Externally published: Yes

Keywords

  • Linear systems
  • Markov decision processes
  • Markov processes
  • Optimization
  • Probabilistic logic
  • Stability criteria
  • Switched systems
  • Switches

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
