
Abstract
We consider minimization of a smooth nonconvex objective function using an iterative algorithm based on Newton's method and the linear conjugate gradient method, with explicit detection and use of negative curvature directions of the Hessian of the objective function. The algorithm closely tracks Newton-conjugate gradient procedures developed in the 1980s, but includes enhancements that allow worst-case complexity results to be proved for convergence to points that satisfy approximate first-order and second-order optimality conditions. Knowledge of problem parameters such as a bound on the Hessian norm is not required. The complexity results match the best known results in the literature for second-order methods. The talk represents joint work with Clement Royer and Michael O'Neill.
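
To illustrate the core mechanism described above (not the authors' actual algorithm, whose safeguards and parameter choices are more involved), the following minimal Python sketch runs linear conjugate gradient on the Newton system H d = -g and returns early with a direction of (near-)negative curvature whenever one is encountered. The function name, the tolerance eps, and the iteration cap are placeholders introduced only for this example.

import numpy as np

def newton_cg_step(H, g, eps=1e-6, max_iter=100):
    """Run linear CG on H d = -g, stopping early if a direction with
    curvature below eps is detected.  Returns (d, kind), where kind is
    'newton' for an approximate Newton step or 'neg_curv' for a
    negative-curvature direction.  Illustrative sketch only."""
    n = g.shape[0]
    d = np.zeros(n)
    r = g.copy()          # residual of H d + g (d = 0 initially)
    p = -r                # first CG search direction
    for _ in range(max_iter):
        Hp = H @ p
        pHp = p @ Hp
        if pHp <= eps * (p @ p):
            # p is a (near-)negative curvature direction of H;
            # return it for use as a descent direction
            return p, 'neg_curv'
        alpha = (r @ r) / pHp
        d = d + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= eps * np.linalg.norm(g):
            return d, 'newton'   # approximate Newton step found
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return d, 'newton'

In a Newton-CG framework of this kind, a 'newton' return is used as the search step, while a 'neg_curv' return is scaled and followed as a descent direction along negative curvature; the complexity guarantees mentioned in the abstract come from carefully controlling both cases, which is beyond the scope of this sketch.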