TY - GEN
T1 - Stochastic incremental gradient descent for estimation in sensor networks
AU - Ram, S. Sundhar
AU - Nedić, A.
AU - Veeravalli, V. V.
PY - 2007
Y1 - 2007
N2 - We consider a network of sensors deployed to sense a spatial field for the purposes of parameter estimation. Each sensor makes a sequence of measurements that is corrupted by noise. The estimation problem is to determine the value of a parameter that minimizes a cost that is a function of the measurements and the unknown parameter. The cost function is such that it can be written as the sum of functions (one corresponding to each sensor), each of which is associated with one sensor's measurements. Such an objective function is of interest in regression. We are interested in solving the above optimization problem in a distributed and recursive manner. Towards this end, we combine the incremental gradient approach with the Robbins-Monro approximation algorithm to develop the Incremental Robbins-Monro Gradient (IRMG) algorithm. We investigate the convergence of the algorithm under a convexity assumption on the cost function and a stochastic model for the sensor measurements. In particular, we show that if the observations at each sensor are independent and identically distributed, then the IRMG algorithm converges to the optimum solution almost surely as the number of observations goes to infinity. We emphasize that the IRMG algorithm itself requires no information about the stochastic model.
UR - http://www.scopus.com/inward/record.url?scp=50249117735&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=50249117735&partnerID=8YFLogxK
U2 - 10.1109/ACSSC.2007.4487280
DO - 10.1109/ACSSC.2007.4487280
M3 - Conference contribution
AN - SCOPUS:50249117735
SN - 9781424421107
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 582
EP - 586
BT - Conference Record of the 41st Asilomar Conference on Signals, Systems and Computers, ACSSC
T2 - 41st Asilomar Conference on Signals, Systems and Computers, ACSSC
Y2 - 4 November 2007 through 7 November 2007
ER -