TY - JOUR
T1 - Mean-Field Stabilization of Markov Chain Models for Robotic Swarms
T2 - Computational Approaches and Experimental Results
AU - Deshmukh, Vaibhav
AU - Elamvazhuthi, Karthik
AU - Biswal, Shiba
AU - Kakish, Zahi
AU - Berman, Spring
N1 - Funding Information:
Manuscript received September 10, 2017; accepted December 13, 2017. Date of publication January 12, 2018; date of current version March 21, 2018. This letter was recommended for publication by Associate Editor J. Ota and Editor N. Y. Chong upon evaluation of the reviewers' comments. This work was supported in part by ONR Young Investigator Award N00014-16-1-2605 and by the Arizona State University Global Security Initiative. (Corresponding author: Karthik Elamvazhuthi.) The authors are with the School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ 85281 USA (e-mail: vdeshmuk@asu.edu; karthikevaz@asu.edu; sbiswal@asu.edu; zahi.kakish@asu.edu; spring.berman@asu.edu).
Publisher Copyright:
© 2018 IEEE.
PY - 2018/7
Y1 - 2018/7
AB - In this letter, we present two computational approaches for synthesizing decentralized density-feedback laws that asymptotically stabilize a strictly positive target equilibrium distribution of a swarm of agents among a set of states. The agents' states evolve according to a continuous-time Markov chain on a bidirected graph, and the density-feedback laws are designed to prevent the agents from switching between states at equilibrium. First, we use classical linear matrix inequality (LMI)-based tools to synthesize linear feedback laws that (locally) exponentially stabilize the desired equilibrium distribution of the corresponding mean-field model. Since these feedback laws violate positivity constraints on the control inputs, we construct rational feedback laws that respect these constraints and have the same stabilizing properties as the original feedback laws. Next, we present a sum-of-squares (SOS)-based approach to constructing polynomial feedback laws that globally stabilize an equilibrium distribution and also satisfy the positivity constraints. We validate the effectiveness of these control laws through numerical simulations with different agent populations and graph sizes and through multirobot experiments on spatial redistribution among four regions.
KW - Swarms
KW - distributed robot systems
KW - multirobot systems
KW - optimization and optimal control
KW - probability and statistical methods
UR - http://www.scopus.com/inward/record.url?scp=85057645725&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057645725&partnerID=8YFLogxK
DO - 10.1109/LRA.2018.2792696
M3 - Article
AN - SCOPUS:85057645725
SN - 2377-3766
VL - 3
SP - 1985
EP - 1992
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 3
ER -
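
The abstract describes a mean-field model in which the population density vector x(t) evolves as dx/dt = sum over edges e of u_e(x) B_e x, with nonnegative transition rates u_e that vanish at the target distribution so agents stop switching at equilibrium. The following is a minimal Python sketch of that setup on a hypothetical 4-state bidirected line graph. The feedback law used here is a simple illustrative assumption, not the LMI-based rational or SOS polynomial controllers synthesized in the paper.

    # Hypothetical sketch: simulate dx/dt = sum_e u_e(x) B_e x on a 4-state
    # bidirected line graph, with a nonnegative feedback rate that is zero
    # at the target distribution (an assumption, not the paper's controllers).
    import numpy as np
    from scipy.integrate import solve_ivp

    n = 4                                  # number of states (regions)
    edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
    x_eq = np.array([0.4, 0.3, 0.2, 0.1])  # strictly positive target distribution

    def B(i, j):
        # Edge matrix: drains density from state i and deposits it in state j.
        M = np.zeros((n, n))
        M[i, i] = -1.0
        M[j, i] = 1.0
        return M

    Bs = {e: B(*e) for e in edges}

    def u(e, x, k=5.0):
        # Illustrative density feedback: nonnegative, and zero at x = x_eq,
        # so agents stop switching between states once the target is reached.
        i, _ = e
        return k * max(x[i] - x_eq[i], 0.0)

    def f(t, x):
        return sum(u(e, x) * (Bs[e] @ x) for e in edges)

    x0 = np.array([1.0, 0.0, 0.0, 0.0])    # all agents start in state 0
    sol = solve_ivp(f, (0.0, 60.0), x0, rtol=1e-8, atol=1e-10)
    print(np.round(sol.y[:, -1], 3))       # approaches x_eq = [0.4 0.3 0.2 0.1]

Because each u_e(x) is nonnegative by construction and vanishes at x_eq, the sketch respects the positivity constraints on the control inputs and the no-switching-at-equilibrium property described in the abstract; the paper's LMI and SOS syntheses provide controllers with certified (local or global) stability guarantees instead of this ad hoc rule.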