Distributed Bregman-distance algorithms for min-max optimization

Kunal Srivastava, Angelia Nedić, Dušan Stipanović

Research output: Chapter in Book/Report/Conference proceeding › Chapter

13 Scopus citations

Abstract

We consider a min-max optimization problem over a time-varying network of computational agents, where each agent in the network has a local convex cost function that is private knowledge of the agent. The agents want to jointly minimize the maximum cost incurred by any agent in the network, while maintaining the privacy of their objective functions. To solve the problem, we consider subgradient algorithms where each agent computes its own estimates of an optimal point based on its own cost function, and communicates these estimates to its neighbors in the network. The algorithms employ techniques from convex optimization, stochastic approximation, and averaging protocols (typically used to ensure proper information diffusion over a network), which allow for a time-varying network structure. We discuss two algorithms, one based on an exact-penalty approach and the other on a primal-dual Lagrangian approach, where both approaches utilize Bregman-distance functions. We establish convergence of the algorithms (with probability one) for a diminishing step size, and demonstrate the applicability of the algorithms by considering a power allocation problem in a cellular network.
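The abstract does not spell out the update rule, but the ingredients it names (an exact-penalty reformulation, local subgradient steps, neighbor averaging, a diminishing step size) can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a fixed ring network rather than a time-varying one, the Euclidean Bregman distance (squared norm, i.e., plain projected subgradient), toy quadratic costs f_i(x) = (x − a_i)², and the penalty reformulation min_{x,t} Σ_i [t/n + r·max(0, f_i(x) − t)] of min_x max_i f_i(x); the penalty parameter r and the step-size schedule are illustrative choices.

```python
import numpy as np

# Hedged sketch, NOT the chapter's exact algorithm: distributed min-max via an
# exact-penalty reformulation, consensus averaging, and diminishing-step
# subgradient updates with the Euclidean Bregman distance.
n = 5                                      # number of agents
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # toy private data: f_i(x) = (x - a_i)^2
r = 2.0                                    # penalty parameter (assumed large enough)

# Doubly stochastic mixing matrix for a fixed ring network (the chapter allows
# time-varying networks; a static ring keeps the sketch short).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, n)   # each agent's estimate of the minimizer
t = np.zeros(n)                 # each agent's estimate of the max cost

for k in range(1, 20001):
    alpha = 0.1 / np.sqrt(k)    # diminishing step size
    # Averaging (consensus) step: mix estimates with neighbors.
    x, t = W @ x, W @ t
    # Local subgradient of agent i's penalized cost t/n + r*max(0, f_i(x) - t).
    f = (x - a) ** 2
    active = (f > t).astype(float)          # is agent i's constraint violated?
    gx = 2.0 * r * active * (x - a)         # subgradient w.r.t. x
    gt = 1.0 / n - r * active               # subgradient w.r.t. t
    # Projected subgradient step (projection onto a box keeps iterates bounded).
    x = np.clip(x - alpha * gx, -5.0, 5.0)
    t = np.clip(t - alpha * gt, 0.0, 50.0)

# For this toy data, min_x max_i (x - a_i)^2 is attained at x = 2 with value 4,
# and all agents' estimates should (approximately) agree.
print(np.mean(x), np.mean(t))
```

Replacing the squared-norm term with a general Bregman distance would turn the subgradient step into a mirror-descent-style update, which is the role the Bregman-distance functions play in the chapter.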

Original language: English (US)
Title of host publication: Agent-Based Optimization
Publisher: Springer Verlag
Pages: 143-174
Number of pages: 32
ISBN (Print): 9783642340963
DOIs
State: Published - Jan 1 2013
Externally published: Yes

Publication series

Name: Studies in Computational Intelligence
Volume: 456
ISSN (Print): 1860-949X

ASJC Scopus subject areas

  • Artificial Intelligence
