TY - JOUR
T1 - Stabilization of Nonlinear Discrete-Time Systems to Target Measures Using Stochastic Feedback Laws
AU - Biswal, Shiba
AU - Elamvazhuthi, Karthik
AU - Berman, Spring
N1 - Funding Information:
Manuscript received July 14, 2019; revised February 22, 2020; accepted June 7, 2020. Date of publication June 17, 2020; date of current version April 26, 2021. This work was supported by the Office of Naval Research (ONR) under Young Investigator Award N00014-16-1-2605. Recommended by Associate Editor Q.-S. Jia. (Corresponding author: Shiba Biswal.) Shiba Biswal and Karthik Elamvazhuthi are with the Department of Mathematics, University of California, Los Angeles, CA 90095 USA (e-mail: sbiswal@asu.edu; karthikevaz@math.ucla.edu).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/5
Y1 - 2021/5
AB - In this article, we address the problem of stabilizing a discrete-time deterministic nonlinear control system to a target invariant measure using time-invariant stochastic feedback laws. This problem can be viewed as an extension of the problem of designing the transition probabilities of a Markov chain so that the process is exponentially stabilized to a target stationary distribution. Alternatively, it can be seen as an extension of the classical control problem of asymptotically stabilizing a discrete-time system to a single point, which corresponds to a Dirac measure in the measure stabilization framework. We assume that the target measure is supported on the entire state space of the system and is absolutely continuous with respect to the Lebesgue measure. Under the condition that the system is locally controllable at every point in the state space within one time step, we show that the associated measure stabilization problem is well-posed. Given this well-posedness result, we then formulate an infinite-dimensional convex optimization problem to construct feedback control laws that stabilize the system to the target invariant measure while maximizing the rate of convergence. We validate our optimization approach with numerical simulations of two-dimensional linear and nonlinear discrete-time control systems.
KW - Decentralized control
KW - discrete-time Markov processes
KW - multiagent systems/swarm robotics
KW - optimization
UR - http://www.scopus.com/inward/record.url?scp=85104865099&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104865099&partnerID=8YFLogxK
U2 - 10.1109/TAC.2020.3002971
DO - 10.1109/TAC.2020.3002971
M3 - Article
AN - SCOPUS:85104865099
SN - 0018-9286
VL - 66
SP - 1957
EP - 1972
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 5
M1 - 9119772
ER -
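
The finite-state analog mentioned in the abstract, choosing a Markov chain's transition probabilities so that it converges as quickly as possible to a prescribed stationary distribution, can be posed as a small convex program. The sketch below is a hypothetical illustration of that analog only, not the authors' infinite-dimensional formulation; it assumes NumPy and CVXPY, restricts to reversible chains so the mixing-rate objective is convex, and uses illustrative names (n, pi, P, S).

import numpy as np
import cvxpy as cp

# Finite-state sketch: pick transition probabilities P so that a reversible
# Markov chain has the prescribed stationary distribution pi and converges
# to it as fast as possible (the classic fastest-mixing convex program).
n = 4
pi = np.array([0.1, 0.2, 0.3, 0.4])      # target stationary distribution

P = cp.Variable((n, n), nonneg=True)      # transition probabilities, P_ij >= 0
constraints = [
    P @ np.ones(n) == np.ones(n),         # rows sum to one (stochastic matrix)
    pi @ P == pi,                         # pi is stationary: pi P = pi
]
DP = np.diag(pi) @ P                      # DP_ij = pi_i * P_ij
constraints.append(DP == DP.T)            # detailed balance (reversibility)

# For reversible P, S = D^{1/2} P D^{-1/2} is symmetric with top eigenvector
# q = sqrt(pi); subtracting q q^T leaves the second-largest eigenvalue
# modulus, which bounds the geometric rate of convergence to pi.
D_half = np.diag(np.sqrt(pi))
D_half_inv = np.diag(1.0 / np.sqrt(pi))
S = D_half @ P @ D_half_inv
q = np.sqrt(pi).reshape(-1, 1)
objective = cp.Minimize(cp.norm(S - q @ q.T, 2))   # minimize the spectral gap bound

prob = cp.Problem(objective, constraints)
prob.solve()                              # SDP-representable; CVXPY's default solver handles it
print("optimal convergence bound:", prob.value)
print(np.round(P.value, 3))

Minimizing the spectral norm of S - q q^T is a standard way to maximize the convergence rate for reversible chains; the paper's contribution is lifting this kind of rate-optimal design from a finite state space to deterministic nonlinear dynamics on a continuous state space, where the decision variable becomes a stochastic feedback law rather than a transition matrix.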