TY - GEN
T1 - A Distributed Stochastic Gradient Tracking Method
AU - Pu, Shi
AU - Nedich, Angelia
N1 - Funding Information:
This work was supported in part by NSF grants CPS 15-44953 and CCF-1717391, and ONR grant no. N00014-12-1-0998.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents have access only to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method. We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant step size). More importantly, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, a performance comparable to that of a centralized stochastic gradient algorithm. Numerical examples further demonstrate the effectiveness of the method.
AB - In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents have access only to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method. We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant step size). More importantly, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, a performance comparable to that of a centralized stochastic gradient algorithm. Numerical examples further demonstrate the effectiveness of the method.
UR - http://www.scopus.com/inward/record.url?scp=85062168618&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062168618&partnerID=8YFLogxK
U2 - 10.1109/CDC.2018.8618708
DO - 10.1109/CDC.2018.8618708
M3 - Conference contribution
AN - SCOPUS:85062168618
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 963
EP - 968
BT - 2018 IEEE Conference on Decision and Control, CDC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 57th IEEE Conference on Decision and Control, CDC 2018
Y2 - 17 December 2018 through 19 December 2018
ER -