Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization

Research output: Contribution to journal › Article › peer-review

99 Scopus citations

Abstract

This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^{m} f_i(x)$ in a system consisting of m agents that are embedded in a communication network. Each agent i has a collection of data captured by its privately known objective function $f_i(x)$. The distributed algorithms considered here obey two simple rules: the privately known agent functions $f_i(x)$ cannot be disclosed to any other agent in the network, and every agent is aware only of the local connectivity structure of the network, i.e., it knows its one-hop neighbors only. While obeying these two rules, the distributed algorithms that the agents execute should find a solution to the overall system problem despite the limited knowledge of the objective function and the limited local communications. This article gives an overview of such algorithms, which typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from one agent to all other agents in the network.
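As an illustration of these two update steps, below is a minimal sketch of a decentralized gradient iteration of the kind surveyed in the article: each agent averages its one-hop neighbors' iterates (mixing step) and then moves along the gradient of its own private objective (gradient step). The least-squares objectives, ring topology, mixing weights, and step size here are illustrative assumptions made for this example, not details taken from the article.

```python
import numpy as np

# Sketch of a decentralized gradient iteration:
#   x_i <- sum_j W[i, j] * x_j - alpha * grad f_i(x_i)
# The quadratic objectives, ring network, and step size are assumptions
# made for this example only.

rng = np.random.default_rng(0)
m, n = 5, 3                                   # agents, decision dimension
A = [rng.standard_normal((4, n)) for _ in range(m)]
b = [rng.standard_normal(4) for _ in range(m)]

def grad_f(i, x):
    """Gradient of agent i's private objective f_i(x) = 0.5*||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network: each agent
# communicates with its two one-hop neighbors only.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

alpha = 0.02                                  # constant step size
X = np.zeros((m, n))                          # row i holds agent i's iterate

for _ in range(3000):
    grads = np.stack([grad_f(i, X[i]) for i in range(m)])
    X = W @ X - alpha * grads                 # mixing step + gradient step

# With a constant step size, the agents agree up to an O(alpha) error
# around a minimizer of the average objective.
print("disagreement across agents:", np.linalg.norm(X - X.mean(axis=0)))
```

With a constant step size, iterations of this form drive the agents to consensus only up to an error proportional to the step size; a suitably diminishing step size yields exact convergence to a minimizer of the average objective.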

Original language: English (US)
Article number: 9084356
Pages (from-to): 92-101
Number of pages: 10
Journal: IEEE Signal Processing Magazine
Volume: 37
Issue number: 3
DOIs
State: Published - May 2020

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
