- Speaker: Stephen Wright, University of Wisconsin, (Speaker’s Website)
- Time: February 19, 2015, 3:30pm-5:00pm
- Location: POB 6.304
- Host: Inderjit S. Dhillon
The approach of minimizing a function by successively fixing most of its variables and minimizing with respect to the others dates back many years, and has been applied in an enormous range of applications. Until recently, however, the approach did not command much respect among optimization researchers; only a few prominent individuals took it seriously. Recent years have seen an explosion in applications, particularly in data analysis, which has driven a new wave of research into variants of coordinate descent and their convergence properties. Such aspects as randomization in the choice of variables to fix and relax, acceleration methods, extension to regularized objectives, and parallel implementation have commanded a good deal of attention during the past five years. In this lecture, I will survey these developments, then focus on recent work on asynchronous parallel implementations for multicore computers. An analysis of the properties of the latter algorithms shows that near-linear speedup can be expected, up to a number of processors that depends on the coupling between the variables.
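The basic idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the speaker's method: randomized coordinate descent applied to a strongly convex quadratic f(x) = ½xᵀAx − bᵀx, where each step exactly minimizes f over one randomly chosen coordinate while the others stay fixed (the test matrix and step count are illustrative choices).

```python
import numpy as np

def coordinate_descent(A, b, iters=200, seed=0):
    """Randomized coordinate descent for f(x) = 0.5 x^T A x - b^T x,
    with A symmetric positive definite. Each iteration picks one
    coordinate at random and minimizes f exactly over it, holding
    the remaining variables fixed."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)  # coordinate to relax this step
        # Exact minimizer over coordinate i: solve df/dx_i = 0,
        # i.e. A[i] @ x = b[i], for x[i] with the others fixed.
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Small SPD example; the minimizer of f satisfies A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = coordinate_descent(A, b)
```

For this strongly convex problem the iterates converge linearly to the solution of Ax = b; the asynchronous parallel variants discussed in the talk run many such single-coordinate updates concurrently, and the attainable speedup is limited by how strongly the coordinates are coupled (here, the off-diagonal entries of A).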