**An Asynchronous Parallel Stochastic Coordinate Descent Algorithm**

**Authors:** *Ji Liu*, *Stephen J. Wright*, *Christopher Ré*, *Victor Bittorf*, and *Srikrishna Sridhar*

**Conference:** Proceedings of the 31st International Conference on Machine Learning (ICML-14)

**Year:** 2014

**Pages:** 469-477

**Abstract:** We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate ($1/K$) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is $O(n^{1/2})$ in unconstrained optimization and $O(n^{1/4})$ in the separable-constrained case, where $n$ is the number of variables. We describe results from implementation on 40-core processors.
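The core idea in the abstract — many workers updating random coordinates of a shared iterate without locking, possibly reading stale values — can be illustrated with a small sketch. This is not the authors' implementation; the function name `asyscd`, the quadratic objective, and all parameter choices are assumptions made for illustration.

```python
import threading
import numpy as np

def asyscd(A, b, n_threads=4, iters_per_thread=5000, step=None):
    """Sketch of asynchronous stochastic coordinate descent on the
    smooth strongly convex objective f(x) = 0.5 x^T A x - b^T x,
    with A positive definite. Workers update a shared iterate x
    lock-free; reads of x may be stale, as in the asynchronous model."""
    n = A.shape[0]
    x = np.zeros(n)  # shared iterate, updated without locks
    if step is None:
        # conservative uniform step based on the largest coordinate
        # Lipschitz constant (illustrative choice, not from the paper)
        step = 1.0 / np.max(np.diag(A))

    seeds = np.random.SeedSequence(0).spawn(n_threads)

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(iters_per_thread):
            i = rng.integers(n)       # pick a random coordinate
            g = A[i] @ x - b[i]       # coordinate gradient; x may be stale
            x[i] -= step * g          # unsynchronized coordinate update

    threads = [threading.Thread(target=worker, args=(s,)) for s in seeds]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

Despite the races on `x`, the iterates still converge for well-conditioned problems, which is the behavior the paper analyzes (linear rate under essential strong convexity, with near-linear speedup while the processor count stays within the stated bounds).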

