**Heavy-tailed regression with a generalized median-of-means**

**Authors:** *Daniel Hsu* and *Sivan Sabato*

**Conference:** Proceedings of the 31st International Conference on Machine Learning (ICML-14)

**Year:** 2014

**Pages:** 37-45

**Abstract:** This work proposes a simple and computationally efficient estimator for linear regression, and other smooth and strongly convex loss minimization problems. We prove loss approximation guarantees that hold for general distributions, including those with heavy tails. All prior results only hold for estimators which either assume bounded or subgaussian distributions, require prior knowledge of distributional properties, or are not known to be computationally tractable. In the special case of linear regression with possibly heavy-tailed responses and with bounded and well-conditioned covariates in $d$ dimensions, we show that a random sample of size $\tilde{O}(d\log(1/\delta))$ suffices to obtain a constant factor approximation to the optimal loss with probability $1-\delta$, a minimax optimal sample complexity up to log factors. The core technique used in the proposed estimator is a new generalization of the median-of-means estimator to arbitrary metric spaces.
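For intuition, the classic scalar median-of-means estimator that the paper generalizes works by splitting the sample into $k$ groups, averaging each group, and taking the median of the group means; this damps the influence of heavy-tailed outliers that would corrupt a plain sample mean. The sketch below shows the standard scalar version only, not the paper's metric-space generalization; function and parameter names are illustrative.

```python
import numpy as np

def median_of_means(samples, k):
    """Classic scalar median-of-means estimator (illustrative sketch).

    Split the sample into k roughly equal groups, compute each group's
    mean, and return the median of those means. With k ~ log(1/delta)
    groups this gives sub-Gaussian-style deviation bounds even when the
    underlying distribution only has finite variance.
    """
    samples = np.asarray(samples, dtype=float)
    groups = np.array_split(samples, k)          # k roughly equal blocks
    group_means = [g.mean() for g in groups]     # one mean per block
    return float(np.median(group_means))         # median is robust to outlier blocks
```

A heavy-tailed block can pull its own mean arbitrarily far off, but the median is unaffected as long as a majority of blocks remain well-behaved; the paper's contribution is extending this majority-vote idea from the real line to arbitrary metric spaces.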




Brought to you by the WUSTL Machine Learning Group.