Unconstrained optimization methods based on random models
Interest in unconstrained optimization methods that use random estimates of function values and derivatives is considerable and motivated by many applications, including machine learning. The aim of this talk is to give an overview of such methods and to discuss open areas of research. The procedures presented have lower per-iteration computational complexity than classical deterministic methods because they employ random models inspired by randomized linear algebra tools. Under suitable assumptions, these stochastic optimization procedures can achieve a desired level of accuracy in the first-order optimality condition. We discuss the construction of the random models and present iteration-complexity results for driving the gradient norm below a prescribed accuracy.
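To make the idea of a random model concrete, the following is a minimal, illustrative Python sketch of one such construction: a linear model of the objective restricted to a random low-dimensional subspace spanned by a Gaussian sketch, so that each iteration needs only a handful of function evaluations rather than a full gradient. The function names, the fixed step size, and the finite-difference scheme are assumptions chosen for illustration, not the specific methods covered in the talk.

```python
import numpy as np

def subspace_gradient_step(f, x, sketch_dim=5, h=1e-6, step=1e-2, rng=None):
    """One iteration of an illustrative random-subspace model method.

    A Gaussian sketch S (n x s, scaled so E[S S^T] = I) spans a random
    subspace; the sketched gradient S^T grad f(x) is estimated by forward
    differences along the s columns of S, costing s + 1 function
    evaluations per iteration instead of n + 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    S = rng.standard_normal((n, sketch_dim)) / np.sqrt(sketch_dim)
    fx = f(x)
    # Forward-difference estimate of the sketched gradient S^T grad f(x).
    gs = np.array([(f(x + h * S[:, j]) - fx) / h for j in range(sketch_dim)])
    # Descent step taken inside the random subspace.
    return x - step * (S @ gs)

if __name__ == "__main__":
    # Toy usage on a simple quadratic, where grad f(x) = x, so the
    # gradient norm equals ||x|| and should shrink toward the tolerance.
    rng = np.random.default_rng(0)
    f = lambda x: 0.5 * np.dot(x, x)
    x = rng.standard_normal(100)
    for _ in range(2000):
        x = subspace_gradient_step(f, x, rng=rng)
    print(np.linalg.norm(x))
```

Because the scaled sketch satisfies E[S S^T] = I, the step direction S S^T grad f(x) equals the true gradient in expectation, which is the kind of property the complexity analyses mentioned above rely on.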