### Abstract

In this letter, we present two computational approaches for synthesizing decentralized density-feedback laws that asymptotically stabilize a strictly positive target equilibrium distribution of a swarm of agents among a set of states. The agents' states evolve according to a continuous-time Markov chain on a bidirected graph, and the density-feedback laws are designed to prevent the agents from switching between states at equilibrium. First, we use classical linear matrix inequality (LMI)-based tools to synthesize linear feedback laws that (locally) exponentially stabilize the desired equilibrium distribution of the corresponding mean-field model. Since these feedback laws violate positivity constraints on the control inputs, we construct rational feedback laws that respect these constraints and have the same stabilizing properties as the original feedback laws. Next, we present a sum-of-squares (SOS)-based approach to constructing polynomial feedback laws that globally stabilize an equilibrium distribution and also satisfy the positivity constraints. We validate the effectiveness of these control laws through numerical simulations with different agent populations and graph sizes and through multirobot experiments on spatial redistribution among four regions.
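The mean-field model referenced in the abstract can be illustrated with a small simulation. The sketch below is not the paper's LMI- or SOS-derived controller; it uses a hypothetical ratio-based density-feedback rate u_ij(x) = k·max(0, x_i/x_i* − x_j/x_j*), chosen only because it is nonnegative (respecting the positivity constraints on control inputs) and vanishes at the target distribution x*, so agents stop switching at equilibrium. The graph, gains, and target distribution are assumptions for illustration.

```python
# Minimal sketch (NOT the paper's exact control law): mean-field model of a
# continuous-time Markov chain on a bidirected graph, driven by a hypothetical
# density-feedback transition rate that is zero at the target distribution.
import numpy as np

def simulate(x0, x_star, edges, k=1.0, dt=0.01, steps=20000):
    """Forward-Euler integration of dx_i/dt = sum_j (u_ji x_j - u_ij x_i)."""
    x = np.asarray(x0, dtype=float).copy()
    x_star = np.asarray(x_star, dtype=float)
    for _ in range(steps):
        r = x / x_star                         # density ratios x_i / x_i*
        dx = np.zeros_like(x)
        for i, j in edges:                     # bidirected edge {i, j}
            u_ij = k * max(0.0, r[i] - r[j])   # rate i -> j (nonnegative)
            u_ji = k * max(0.0, r[j] - r[i])   # rate j -> i (nonnegative)
            flux = u_ij * x[i] - u_ji * x[j]   # net probability flow i -> j
            dx[i] -= flux
            dx[j] += flux
        x += dt * dx                           # total mass is conserved exactly
    return x

# Four regions on a line graph 0-1-2-3, strictly positive target distribution.
edges = [(0, 1), (1, 2), (2, 3)]
x_star = np.array([0.1, 0.2, 0.3, 0.4])
x_final = simulate([0.25, 0.25, 0.25, 0.25], x_star, edges)
print(np.round(x_final, 4))   # approaches x_star; all rates tend to zero
```

At x = x*, every ratio r_i equals 1, so all rates u_ij vanish and no agent switches states, mirroring the equilibrium property the letter designs for.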

Original language | English (US)
---|---
Pages (from-to) | 1985-1992
Number of pages | 8
Journal | IEEE Robotics and Automation Letters
Volume | 3
Issue number | 3
DOIs | https://doi.org/10.1109/LRA.2018.2792696
State | Published - Jul 1 2018

### Keywords

- distributed robot systems
- multirobot systems
- optimization and optimal control
- probability and statistical methods
- Swarms

### ASJC Scopus subject areas

- Control and Systems Engineering
- Human-Computer Interaction
- Biomedical Engineering
- Mechanical Engineering
- Control and Optimization
- Artificial Intelligence
- Computer Science Applications
- Computer Vision and Pattern Recognition

### Cite this

Deshmukh, V., Elamvazhuthi, K., Biswal, S., Kakish, Z., & Berman, S. (2018). Mean-Field Stabilization of Markov Chain Models for Robotic Swarms: Computational Approaches and Experimental Results. *IEEE Robotics and Automation Letters*, *3*(3), 1985-1992. https://doi.org/10.1109/LRA.2018.2792696

Research output: Contribution to journal › Article

```
TY - JOUR
T1 - Mean-Field Stabilization of Markov Chain Models for Robotic Swarms
T2 - Computational Approaches and Experimental Results
AU - Deshmukh, Vaibhav
AU - Elamvazhuthi, Karthik
AU - Biswal, Shiba
AU - Kakish, Zahi
AU - Berman, Spring
PY - 2018/7/1
Y1 - 2018/7/1
AB - In this letter, we present two computational approaches for synthesizing decentralized density-feedback laws that asymptotically stabilize a strictly positive target equilibrium distribution of a swarm of agents among a set of states. The agents' states evolve according to a continuous-time Markov chain on a bidirected graph, and the density-feedback laws are designed to prevent the agents from switching between states at equilibrium. First, we use classical linear matrix inequality (LMI)-based tools to synthesize linear feedback laws that (locally) exponentially stabilize the desired equilibrium distribution of the corresponding mean-field model. Since these feedback laws violate positivity constraints on the control inputs, we construct rational feedback laws that respect these constraints and have the same stabilizing properties as the original feedback laws. Next, we present a sum-of-squares (SOS)-based approach to constructing polynomial feedback laws that globally stabilize an equilibrium distribution and also satisfy the positivity constraints. We validate the effectiveness of these control laws through numerical simulations with different agent populations and graph sizes and through multirobot experiments on spatial redistribution among four regions.
KW - distributed robot systems
KW - multirobot systems
KW - optimization and optimal control
KW - probability and statistical methods
KW - Swarms
UR - http://www.scopus.com/inward/record.url?scp=85057645725&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057645725&partnerID=8YFLogxK
U2 - 10.1109/LRA.2018.2792696
DO - 10.1109/LRA.2018.2792696
M3 - Article
AN - SCOPUS:85057645725
VL - 3
SP - 1985
EP - 1992
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
SN - 2377-3766
IS - 3
ER -
```