We consider the problem of stabilizing a swarm of agents to a target probability distribution over a set of states, given that the agents' states evolve according to an interacting system of continuous-time Markov chains (CTMCs). We construct a class of density-feedback laws, i.e., control laws that are functions of the swarm population density, that achieve this objective provided that the graph associated with the CTMCs is strongly connected. To execute these control laws, each agent requires only the population fraction of agents that occupy its current state. Additionally, the control laws ensure that no agents transition between states at equilibrium, a known drawback of stabilization using time- and density-independent control laws. We guarantee global asymptotic stability of the equilibrium distribution by analyzing the corresponding mean-field model. The fact that any probability distribution can be globally stabilized is a significant extension of previous mean-field-based approaches that control swarms of agents using time-invariant control laws, which require the equilibrium distribution to have a strongly connected support. To admit feedback laws that take values only in a discrete set, we consider control laws that can be discontinuous functions of the agent densities. We validate the control laws using stochastic simulations of the CTMC model and numerical simulations of the mean-field model.
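The density-feedback idea can be illustrated with a minimal numerical sketch. The following is our own illustrative construction, not the control law derived in the paper: on a three-state cycle graph (which is strongly connected), the transition rate out of each state depends only on that state's own density and vanishes once the state is no longer overpopulated, so there are no transitions at the target equilibrium. The gain `k`, the target distribution, and the Euler discretization are all assumptions made for the example.

```python
# Sketch of a mean-field (ODE) model with a density-feedback law on a
# 3-state cycle 0 -> 1 -> 2 -> 0. This is an illustrative construction,
# not the paper's control law. Each state's outflow rate is a function
# of its own density only and is zero unless the state is overpopulated,
# so no agent transitions occur at the equilibrium x = target.

def simulate(target, x0, k=1.0, dt=0.01, steps=20000):
    """Forward-Euler integration of dx/dt = inflow - outflow on a cycle."""
    n = len(target)
    x = list(x0)
    for _ in range(steps):
        # Rate out of state i: positive only when x[i] exceeds target[i],
        # and computable from the local density x[i] alone.
        u = [k * max(x[i] - target[i], 0.0) for i in range(n)]
        # Probability flux along the cycle edge i -> (i+1) mod n.
        flow = [u[i] * x[i] for i in range(n)]
        x = [x[i] + dt * (flow[(i - 1) % n] - flow[i]) for i in range(n)]
    return x

target = [0.2, 0.3, 0.5]
x = simulate(target, x0=[1.0, 0.0, 0.0])
```

At any stationary point of these dynamics, all fluxes around the cycle must be equal; a common positive value would require every state to be overpopulated, which contradicts the densities summing to one, so all fluxes are zero and the only equilibrium is the target distribution. Numerically, the trajectory from a point-mass initial condition settles near `target` after a long horizon.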