Abstract
In this paper, we show that a mean-field discrete-time Markov process evolving on a compact subset of R^d can be stabilized to an arbitrary target distribution that has a continuous density. Unlike in our previous works, this density need not have connected support on the state space. Our main application of interest is characterizing the distribution of a multi-agent system that evolves according to a discrete-time Markov process. Even if the Markov process converges to an equilibrium distribution, the agents may continue to switch between states, potentially wasting energy. To prevent this unnecessary switching, we show that the Markov process can be constructed in such a way that the operator that pushes forward measures is the identity operator at the target measure. The challenge in the stability analysis of the system arises from the fact that the transition kernel is a function of the current distribution, resulting in a nonlinear Markov process. Moreover, we aim to design the transition kernel, which serves as the feedback control law for the Markov process, to be decentralized in the sense that it depends only on the local density of agents. We prove by construction that there exists a decentralized control law that globally stabilizes the desired measure. To implement this control law, the individual agents must estimate the local population density. We validate our control law with numerical simulations of multi-agent systems of varying population sizes, and observe that the number of agent state transitions at equilibrium decreases significantly as the population size increases.
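As a rough illustration of the density-feedback idea (this is a minimal sketch, not the paper's actual construction), the following toy simulation places agents on a finite line graph and lets an agent leave its state only when that state is overpopulated relative to the target distribution. The state space, the target `pi`, and the specific leave-probability rule are all illustrative assumptions; the key property they demonstrate is that the transition kernel reduces to the identity when the empirical density matches the target, so switching dies out at equilibrium.

```python
import numpy as np

# Hypothetical toy setup: 5 states on a line graph.
rng = np.random.default_rng(0)
n_states = 5
pi = np.array([0.10, 0.30, 0.05, 0.35, 0.20])   # target distribution
n_agents = 1000
states = rng.integers(0, n_states, n_agents)     # arbitrary initial states

def step(states):
    """One mean-field step. Agents leave a state only when it is
    overpopulated relative to the target, so the transition kernel
    is the identity once the empirical density equals pi."""
    f = np.bincount(states, minlength=n_states) / len(states)  # empirical (local) density
    p_leave = np.maximum(f - pi, 0.0)            # zero wherever f_i <= pi_i
    leave = rng.random(len(states)) < p_leave[states]
    # leavers step to a uniformly chosen neighbor; clipping keeps
    # boundary agents in place, which only slows mixing slightly
    direction = 2 * rng.integers(0, 2, len(states)) - 1
    proposed = np.clip(states + direction, 0, n_states - 1)
    new_states = np.where(leave, proposed, states)
    return new_states, int(np.sum(new_states != states))

for _ in range(400):
    states, moves = step(states)

f_final = np.bincount(states, minlength=n_states) / n_agents
# near equilibrium, f_final is close to pi and `moves` per step is small
```

Because `p_leave` vanishes exactly where the empirical density does not exceed the target, the only fixed point with no motion is the target distribution itself; residual switching at equilibrium comes from finite-population fluctuations, which shrink as the number of agents grows.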
| Original language | English (US) |
|---|---|
| Pages (from-to) | 60-65 |
| Number of pages | 6 |
| Journal | IFAC-PapersOnLine |
| Volume | 54 |
| Issue number | 9 |
| DOIs | |
| State | Published - Jun 1 2021 |
| Event | 24th International Symposium on Mathematical Theory of Networks and Systems, MTNS 2020 - Cambridge, United Kingdom. Duration: Aug 23 2021 → Aug 27 2021 |
Keywords
- Large scale systems
- Operator theoretic methods in systems theory
- Robotics
- Stochastic modeling and stochastic systems theory
ASJC Scopus subject areas
- Control and Systems Engineering