TY - JOUR
T1 - Advanced neural-network training algorithm with reduced complexity based on Jacobian deficiency
AU - Zhou, Guian
AU - Si, Jennie
N1 - Funding Information:
Manuscript received May 20, 1996; revised February 20, 1997 and January 8, 1998. This work was supported in part by NSF under Grant ECS-9553202, by EPRI under Grant RP8015-03, and by Motorola. The authors are with the Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287-5706 USA. Publisher Item Identifier S 1045-9227(98)03268-8.
PY - 1998
Y1 - 1998
N2 - In this paper we introduce an advanced supervised training method for neural networks. It is based on Jacobian rank deficiency and is formulated, in some sense, in the spirit of the Gauss-Newton algorithm. The Levenberg-Marquardt algorithm, as a modified Gauss-Newton method, has been used successfully in solving nonlinear least squares problems, including neural-network training. It significantly outperforms basic backpropagation and its variable-learning-rate variations in training accuracy, convergence properties, and overall training time, but at the cost of higher computation and memory complexity within each iteration. The new method developed in this paper aims to improve convergence properties while reducing the memory and computation complexity of supervised neural-network training. Extensive simulation results are provided to demonstrate the superior performance of the new algorithm over the Levenberg-Marquardt algorithm.
KW - Gauss-Newton method
KW - Jacobian rank deficiency
KW - Neural-network training
KW - Subset updating
KW - Trust region algorithms
UR - http://www.scopus.com/inward/record.url?scp=0032075495&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0032075495&partnerID=8YFLogxK
U2 - 10.1109/72.668886
DO - 10.1109/72.668886
M3 - Article
C2 - 18252468
AN - SCOPUS:0032075495
SN - 1045-9227
VL - 9
SP - 448
EP - 453
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 3
ER -