This paper considers some aspects of a gradient projection method proposed by Goldstein, Levitin, and Polyak, and more recently, in a less general context, by McCormick. We propose and analyze some convergent step-size rules to be used in conjunction with the method. These rules are similar in spirit to the efficient Armijo rule for the method of steepest descent, and under mild assumptions they have the desirable property of identifying the set of active inequality constraints in a finite number of iterations. As a result, the method may be converted toward the end of the process to a conjugate direction, quasi-Newton, or Newton's method, and achieve the attendant superlinear convergence rate. As an example, we propose some quadratically convergent combinations of the method with Newton's method. Such combined methods appear to be very efficient for large-scale problems with many simple constraints, such as those often appearing in optimal control.
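The basic iteration described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's algorithm: it assumes simple box constraints (so the projection is a componentwise clip), and the function names, default parameters, and tolerances are all choices made here for illustration. The step size is chosen by an Armijo-type backtracking test along the projection arc, in the spirit of the rules the abstract describes.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def gradient_projection(f, grad, x0, lo, hi,
                        s=1.0, beta=0.5, sigma=1e-4,
                        max_iter=200, tol=1e-8):
    """Projected-gradient iteration x_{k+1} = P(x_k - alpha_k * grad f(x_k)),
    with alpha_k found by Armijo-style backtracking along the projection arc.
    All parameter values here are illustrative defaults, not from the paper."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        alpha = s
        while True:
            x_new = project_box(x - alpha * g, lo, hi)
            # Armijo-type sufficient-decrease test: accept alpha when the
            # achieved decrease dominates sigma * <grad, x - x_new>.
            if f(x) - f(x_new) >= sigma * np.dot(g, x - x_new):
                break
            alpha *= beta          # backtrack: shrink the trial step
            if alpha < 1e-12:      # safeguard against an endless loop
                break
        if np.linalg.norm(x_new - x) < tol:
            return x_new           # components at their bounds are the
                                   # (tentatively identified) active set
        x = x_new
    return x
```

For example, minimizing f(x) = ||x - c||^2 over the unit box with c = (2, -1) outside the box drives the iterate to the projection of c onto the box, (1, 0); both constraints active at the solution are hit after finitely many iterations, which is the behavior the step-size rules are designed to guarantee.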