Inexact variants of many popular and some more exotic methods, including randomized block Kaczmarz, randomized Gaussian Kaczmarz and randomized block coordinate descent, can be cast as special cases.
Numerical experiments demonstrate the benefits of allowing inexactness.
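To illustrate the kind of method the abstract describes, here is a minimal sketch of randomized Kaczmarz for a consistent linear system, with each projection step perturbed by a small error to model inexactness. This is an illustrative toy (the sampling, error model, and parameters are my own choices), not the algorithm or analysis from the paper.

```python
import numpy as np

def inexact_randomized_kaczmarz(A, b, n_iters=2000, eps=1e-3, seed=0):
    """Randomized Kaczmarz with an inexactly computed update (sketch).

    The exact step projects x onto the hyperplane {x : a_i^T x = b_i},
    where row i is sampled with probability proportional to ||a_i||^2.
    Here each step is perturbed by a vector of norm eps, a simple stand-in
    for inexactness. Illustrative only, not the paper's exact scheme.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A**2, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        # Exact projection step onto the i-th hyperplane:
        step = (A[i] @ x - b[i]) / row_norms2[i] * A[i]
        # Inexactness: a random perturbation of norm eps.
        noise = rng.standard_normal(n)
        noise *= eps / np.linalg.norm(noise)
        x = x - step + noise
    return x

# Consistent system: iterates approach the solution up to O(eps) accuracy.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_star = rng.standard_normal(10)
b = A @ x_star
x = inexact_randomized_kaczmarz(A, b)
print(np.linalg.norm(x - x_star))  # small, limited by the inexactness level
```

Running it on a random consistent system shows the benefit and the cost: convergence is essentially that of exact Kaczmarz until the error floor set by `eps` is reached.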
My SEGA slides are here (click on the image to get the PDF file). As of today, Konstantin Mishchenko is visiting Martin Jaggi's Machine Learning and Optimization Laboratory at EPFL. Update (March 17): Konstantin is back at KAUST now.
"Stochastic three points method for unconstrained smooth minimization" - joint work with El Houcine Bergou and Eduard Gorbunov.
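A stochastic three points (STP) style method is easy to sketch: at each iteration, draw a random direction and keep the best of three candidates, the current point and the two points obtained by stepping forward and backward along that direction. The sampling distribution and step-size schedule below are illustrative choices of mine, not the prescription from the paper.

```python
import numpy as np

def stochastic_three_points(f, x0, alpha=0.5, n_iters=500, seed=0):
    """Sketch of a stochastic-three-points-style derivative-free method.

    Per iteration: sample a direction s uniformly on the unit sphere and
    keep whichever of x, x + t*s, x - t*s has the smallest f-value.
    The sqrt-decaying step size t is an illustrative choice.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for k in range(n_iters):
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)          # uniform direction on the sphere
        t = alpha / np.sqrt(k + 1)      # illustrative decaying step size
        for cand in (x + t * s, x - t * s):
            fc = f(cand)
            if fc < fx:                 # keep the best of the three points
                x, fx = cand, fc
    return x, fx

# Minimize a simple smooth quadratic without any gradient information.
x, fx = stochastic_three_points(lambda z: np.sum((z - 1.0)**2), np.zeros(5))
print(fx)  # close to the optimal value 0
```

The appeal of the scheme is that it only needs function evaluations, so it applies to unconstrained smooth minimization even when gradients are unavailable.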
Before joining EPFL, I worked as a lab associate at Disney Research.
EPSRC Fellow in Mathematical Sciences
Turing Fellow, The Alan Turing Institute
School of Mathematics, University of Edinburgh
6317 James Clerk Maxwell Building
Peter Guthrie Tait Road, Edinburgh, EH9 3FD
e-mail: 1st dot last at ed dot ac dot uk
phone: (+44) 131 650 5049

After having spent 9.5 years at the University of Edinburgh (the last two of which I was on leave), I have decided to move on and pursue new opportunities: I will continue my academic career at KAUST.
Abstract: In this paper we present a convergence rate analysis of inexact variants of several randomized iterative methods.
Among the methods studied are stochastic gradient descent, stochastic Newton, stochastic proximal point, and stochastic subspace ascent.