TY - CHAP
T1 - Distributed optimization over networks
AU - Nedich, Angelia
N1 - Funding Information:
Acknowledgment: The work presented in this chapter has been partially supported by NSF grant DMS-1312907.
Publisher Copyright:
© Springer Nature Switzerland AG 2018.
PY - 2018
Y1 - 2018
N2 - The advances in wired and wireless technology have necessitated the development of theory, models, and tools to cope with the new challenges posed by large-scale optimization problems over networks. The classical optimization methodology works under the premise that all problem data are available to some central entity (computing agent/node). This premise does not apply to large networked systems, where each agent (node) typically has access only to its private local information and a local view of the network structure. This chapter will cover the development of such distributed computational models for time-varying networks, both deterministic and stochastic, which arise from the use of different synchronous and asynchronous communication protocols in ad-hoc wireless networks. For each of these network dynamics, distributed algorithms for convex constrained minimization will be considered. To emphasize the role of the network structure in these approaches, our main focus will be on direct primal (sub)gradient methods. The development of these methods combines optimization techniques with graph theory and non-negative matrix theory, which model the network aspect. The lectures will provide some basic background on graphs, graph Laplacians and their properties, and convergence results for related stochastic matrix sequences. Using the graph models and optimization techniques, the convergence and convergence rate analysis of the methods will be presented. The convergence rate results will demonstrate the dependence of the methods’ performance on the problem and network properties, such as the network’s capability to diffuse information.
AB - The advances in wired and wireless technology have necessitated the development of theory, models, and tools to cope with the new challenges posed by large-scale optimization problems over networks. The classical optimization methodology works under the premise that all problem data are available to some central entity (computing agent/node). This premise does not apply to large networked systems, where each agent (node) typically has access only to its private local information and a local view of the network structure. This chapter will cover the development of such distributed computational models for time-varying networks, both deterministic and stochastic, which arise from the use of different synchronous and asynchronous communication protocols in ad-hoc wireless networks. For each of these network dynamics, distributed algorithms for convex constrained minimization will be considered. To emphasize the role of the network structure in these approaches, our main focus will be on direct primal (sub)gradient methods. The development of these methods combines optimization techniques with graph theory and non-negative matrix theory, which model the network aspect. The lectures will provide some basic background on graphs, graph Laplacians and their properties, and convergence results for related stochastic matrix sequences. Using the graph models and optimization techniques, the convergence and convergence rate analysis of the methods will be presented. The convergence rate results will demonstrate the dependence of the methods’ performance on the problem and network properties, such as the network’s capability to diffuse information.
UR - http://www.scopus.com/inward/record.url?scp=85056573101&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056573101&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-97142-1_1
DO - 10.1007/978-3-319-97142-1_1
M3 - Chapter
AN - SCOPUS:85056573101
T3 - Lecture Notes in Mathematics
SP - 1
EP - 84
BT - Lecture Notes in Mathematics
PB - Springer Verlag
ER -