A Distributed Stochastic Gradient Tracking Method

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method. We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant step size choice). More importantly, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, a performance comparable to that of a centralized stochastic gradient algorithm. Numerical examples further demonstrate the effectiveness of the method.
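    To make the setup concrete: with n agents holding local costs f_i, the network seeks a common x minimizing f(x) = (1/n) * sum_i f_i(x), where each agent can only sample unbiased estimates of the gradient of its own f_i. The snippet below is a minimal NumPy sketch of a gradient-tracking iteration of the kind the abstract describes: each agent mixes its iterate with its neighbors' through a doubly stochastic matrix W, descends along a local tracker y_i of the network-average gradient, and refreshes the tracker with the difference of successive stochastic gradients. The quadratic local costs, ring topology, noise level, and step size are illustrative assumptions, not details taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative problem (assumed, not from the paper): agent i holds the
        # smooth, strongly convex local cost f_i(x) = 0.5 * ||A_i x - b_i||^2.
        n, d = 10, 5
        A = 0.5 * rng.standard_normal((n, d, d)) + np.eye(d)
        b = rng.standard_normal((n, d))

        def stoch_grad(i, x):
            # Unbiased estimate of grad f_i(x): exact gradient plus zero-mean noise.
            return A[i].T @ (A[i] @ x - b[i]) + 0.01 * rng.standard_normal(d)

        # Doubly stochastic mixing matrix for a ring graph: each agent averages
        # with its two neighbors.
        W = np.zeros((n, n))
        for i in range(n):
            W[i, i] = 0.5
            W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

        alpha = 0.02                        # constant step size
        x = rng.standard_normal((n, d))     # row i is agent i's iterate
        g = np.stack([stoch_grad(i, x[i]) for i in range(n)])
        y = g.copy()                        # tracker init: y_i(0) = g_i(x_i(0))

        for _ in range(5000):
            x_next = W @ (x - alpha * y)    # mix with neighbors, step along tracker
            g_next = np.stack([stoch_grad(i, x_next[i]) for i in range(n)])
            y = W @ y + g_next - g          # gradient-tracking update
            x, g = x_next, g_next

        # Centralized optimum of (1/n) * sum_i f_i, for comparison.
        x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n)),
                                 sum(A[i].T @ b[i] for i in range(n)))
        print("mean-iterate error:", np.linalg.norm(x.mean(axis=0) - x_star))

    Consistent with the result stated above, under a constant step size the iterates settle into a noise-dominated neighborhood of the optimum; shrinking alpha (and, per the stated bounds, enlarging the network) shrinks that neighborhood.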

    Original language: English (US)
    Title of host publication: 2018 IEEE Conference on Decision and Control, CDC 2018
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 963-968
    Number of pages: 6
    ISBN (Electronic): 9781538613955
    DOIs: 10.1109/CDC.2018.8618708
    State: Published - Jan 18 2019
    Event: 57th IEEE Conference on Decision and Control, CDC 2018 - Miami, United States
    Duration: Dec 17 2018 - Dec 19 2018

    Publication series

    Name: Proceedings of the IEEE Conference on Decision and Control
    Volume: 2018-December
    ISSN (Print): 0743-1546

    Conference

    Conference: 57th IEEE Conference on Decision and Control, CDC 2018
    Country: United States
    City: Miami
    Period: 12/17/18 - 12/19/18

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Modeling and Simulation
    • Control and Optimization

    Cite this

    Pu, S., & Nedich, A. (2019). A Distributed Stochastic Gradient Tracking Method. In 2018 IEEE Conference on Decision and Control, CDC 2018 (pp. 963-968). [8618708] (Proceedings of the IEEE Conference on Decision and Control; Vol. 2018-December). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2018.8618708

    @inproceedings{8c9bdcf590744eb3b38045405b84f570,
    title = "A Distributed Stochastic Gradient Tracking Method",
    abstract = "In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method. We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant step size choice). More importantly, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, which is a comparable performance to a centralized stochastic gradient algorithm. Numerical examples further demonstrate the effectiveness of the method.",
    author = "Shi Pu and Angelia Nedich",
    year = "2019",
    month = "1",
    day = "18",
    doi = "10.1109/CDC.2018.8618708",
    language = "English (US)",
    series = "Proceedings of the IEEE Conference on Decision and Control",
    publisher = "Institute of Electrical and Electronics Engineers Inc.",
    pages = "963--968",
    booktitle = "2018 IEEE Conference on Decision and Control, CDC 2018",
    }
