Sparse Recovery With Unknown Variance: A LASSO-Type Approach
Title | Sparse Recovery With Unknown Variance: A LASSO-Type Approach |
Type de publication | Journal Article |
Year of Publication | 2014 |
Authors | Chretien S, Darses S
Journal | IEEE TRANSACTIONS ON INFORMATION THEORY |
Volume | 60 |
Pagination | 3970-3988 |
Date Published | JUL |
Type of Article | Article |
ISSN | 0018-9448 |
Keywords | high dimensional regression, ℓ1 penalization, LASSO, sparse regression, unknown variance
Abstract | We address the issue of estimating the regression vector β in the generic s-sparse linear model y = Xβ + z, with β ∈ R^p, y ∈ R^n, z ~ N(0, σ²I) and p > n, when the variance σ² is unknown. We study two least absolute shrinkage and selection operator (LASSO)-type methods that jointly estimate β and the variance. These estimators are minimizers of the ℓ1-penalized least-squares functional, where the relaxation parameter is tuned according to two different strategies. In the first strategy, the relaxation parameter is of the order σ̂ √(log p), where σ̂² is the empirical variance. In the second strategy, the relaxation parameter is chosen so as to enforce a tradeoff between the fidelity and penalty terms at optimality. For both estimators, our assumptions are similar to the ones proposed by Candès and Plan (Ann. Statist., 2009) for the case where σ² is known. We prove that our estimators ensure exact recovery of the support and sign pattern of β with high probability. We present simulation results showing that the first estimator enjoys nearly the same performance in practice as the standard LASSO (known-variance case) for a wide range of signal-to-noise ratios. Our second estimator is shown to outperform both in terms of false detections when the signal-to-noise ratio is low. |
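To make the first tuning strategy concrete, below is a minimal sketch of a joint (β, σ) estimation loop in which each LASSO refit uses a relaxation parameter proportional to σ̂ √(log p). The function name lasso_unknown_variance, the initialization of σ̂ from the empirical standard deviation of y, the stopping rule, and the constant in the regularization level are illustrative assumptions, not the paper's exact procedure; scikit-learn's Lasso divides the data-fidelity term by 2n, which the alpha value below accounts for only up to constants.

```python
import numpy as np
from sklearn.linear_model import Lasso


def lasso_unknown_variance(X, y, n_iter=20, tol=1e-6):
    """Alternate between a LASSO fit and an empirical variance update,
    with the relaxation parameter proportional to sigma_hat * sqrt(log p)
    (a sketch of the first strategy described in the abstract)."""
    n, p = X.shape
    sigma_hat = np.std(y)  # crude initial noise-level estimate (assumption)
    beta = np.zeros(p)
    for _ in range(n_iter):
        # sklearn's Lasso minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1,
        # so alpha ~ sigma_hat * sqrt(log p) / n reproduces the scaling
        # only up to constants (heuristic choice, not the paper's tuning).
        alpha = sigma_hat * np.sqrt(np.log(p)) / n
        beta_new = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
        residual = y - X @ beta_new
        sigma_new = np.sqrt(residual @ residual / n)  # empirical variance update
        converged = (abs(sigma_new - sigma_hat) < tol
                     and np.allclose(beta_new, beta))
        beta, sigma_hat = beta_new, sigma_new
        if converged:
            break
    return beta, sigma_hat
```

On a simulated s-sparse problem, the pair (β̂, σ̂) returned by such a loop can then be compared with the known-variance LASSO, in the spirit of the simulation study summarized in the abstract.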
DOI | 10.1109/TIT.2014.2301162 |