# Modern Theory of Second-Order Methods

### Prof. Dr. Yurii Nesterov (KU Leuven, Belgium)

#### Course content

# Lecture 1: Global Complexity Estimates for Second-Order Methods

We present a new variant of the Newton method based on cubic regularization of the quadratic model of the objective function. This modification does not destroy the local convergence of the scheme; at the same time, it opens new possibilities for analyzing the global efficiency of the method. We consider several complexity bounds for different classes of nonconvex functions. In the second part of the lecture we consider a modified Gauss-Newton method, which also admits a global complexity analysis.
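The cubically regularized Newton step minimizes the model \(m(h) = \langle g, h\rangle + \tfrac{1}{2}\langle Hh, h\rangle + \tfrac{M}{6}\|h\|^3\). Its minimizer satisfies \((H + \tfrac{Mr}{2} I)h = -g\) with \(r = \|h\|\), so the subproblem reduces to a one-dimensional search in \(r\). The sketch below (function name `cubic_newton_step` is ours, not from the lecture) solves this scalar equation by bisection, assuming a positive definite Hessian for simplicity; production solvers use more refined eigenvalue-based techniques.

```python
import numpy as np

def cubic_newton_step(grad, hess, M, tol=1e-10, max_iter=100):
    """Minimize g'h + 0.5 h'H h + (M/6)||h||^3 over h (hypothetical sketch).

    The minimizer satisfies (H + (M r / 2) I) h = -g with r = ||h||.
    Since ||h(r)|| is decreasing and r is increasing, ||h(r)|| - r has a
    unique root, which we locate by bisection (H assumed positive definite).
    """
    n = grad.shape[0]
    I = np.eye(n)

    def h_of(r):
        return np.linalg.solve(hess + 0.5 * M * r * I, -grad)

    # Bracket the root of ||h(r)|| - r, then bisect.
    lo, hi = 0.0, 1.0
    while np.linalg.norm(h_of(hi)) > hi:
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(h_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return h_of(hi)
```

For \(M = 0\) the step reduces to the classical Newton step, which is why the modification preserves local convergence while the cubic term controls the global behavior.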

**CCNesterov: Lecture 1 Notes**
# Lecture 2: Accelerated Second-Order Methods. Lower Complexity Bounds

In this lecture we present complexity bounds for second-order schemes as applied to different classes of convex functions. We start from the cubic regularization of the Newton method and show that it can be accelerated using the estimating-sequences technique, a standard acceleration tool for first-order methods. We also derive lower complexity bounds for second-order schemes and discuss an idea for an optimal method.

**CCNesterov: Lecture 2 Notes**
# Lecture 3: Universal Second-Order Methods

In practice, it is often difficult to decide on the best functional class containing a particular objective function. This decision is important, since it usually leads to a concrete choice of optimization scheme with predefined complexity bounds. However, the parameters of the corresponding problem classes are usually unknown. It is therefore interesting to develop universal methods, which adjust their behavior according to the information obtained during the minimization process. In this lecture, we show how this can be done for second-order methods.
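One standard way to adapt to an unknown problem parameter is to treat the regularization constant \(M\) (a stand-in for the unknown Lipschitz constant of the Hessian) as a trial value: double it until the cubic model overestimates the function at the trial point, and relax it after accepted steps. The sketch below illustrates this on a simple convex test function; the function and helper names (`adaptive_cubic_newton`, `cubic_step`) are our own illustration, not the lecture's precise scheme.

```python
import numpy as np

# A simple smooth, strongly convex test problem with analytic derivatives.
def f(x):    return 0.25 * np.sum(x**4) + 0.5 * x @ x
def grad(x): return x**3 + x
def hess(x): return np.diag(3.0 * x**2 + 1.0)

def cubic_step(g, H, M):
    # Solve (H + (M r / 2) I) h = -g with r = ||h||, by bisection
    # (H is positive definite here, so the bracketing is valid).
    I = np.eye(len(g))
    h_of = lambda r: np.linalg.solve(H + 0.5 * M * r * I, -g)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(h_of(hi)) > hi:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(h_of(mid)) > mid else (lo, mid)
    return h_of(hi)

def adaptive_cubic_newton(x, M=1.0, tol=1e-8, max_iter=100):
    """Adaptive cubic regularization (illustrative sketch): the Lipschitz
    constant of the Hessian is unknown, so we guess M, double it until the
    cubic model upper-bounds f at the trial point, and halve it after
    accepted steps so the method stays aggressive when possible."""
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        while True:
            h = cubic_step(g, H, M)
            model = f(x) + g @ h + 0.5 * h @ H @ h \
                    + (M / 6.0) * np.linalg.norm(h)**3
            if f(x + h) <= model:   # model is a valid upper bound: accept
                break
            M *= 2.0                # model too optimistic: increase M
        x = x + h
        M = max(M / 2.0, 1e-8)
    return x
```

The acceptance loop terminates because once \(M\) exceeds the true Hessian Lipschitz constant, the cubic model is a guaranteed upper bound on \(f\); no prior knowledge of that constant is needed.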

**CCNesterov: Lecture 3 Notes**
# Lecture 4: Implementable Tensor Methods

To improve the quality of approximation of a nonlinear function and accelerate the corresponding minimization methods, we can increase the degree of the Taylor polynomials involved in our schemes. In this way, we pass from first-order methods to second-order ones. However, the next step, to third-order methods, already looks dangerous. Indeed, in order to work with the corresponding tensors, we need much more memory. Moreover, with tensors the auxiliary optimization problems, which we need to solve at each iteration of our methods, become almost intractable. This is the reason why, until recently, these methods were never considered for practical computations. In this lecture we show how to overcome these difficulties and construct implementable tensor methods, which have every chance of displacing second-order schemes from the software market in the near future.

**CCNesterov: Lecture 4 Notes**
#### Registration

Registration for this course is required.

Please note that registration is no longer possible, as we have reached the maximum number of participants.

#### Date and Place

All sessions take place in

**Room 03.13.010, FMI Building, TUM, Garching Forschungszentrum.**

All participants receive a confirmation certificate for 8 credit hours.

-- DianeClaytonWinter - 30 Sep 2019