This repository shares the source code for the review paper tentatively titled "Horseshoe meets Lasso" by Bhadra, Datta, Polson and Willard.
The arXiv link: https://arxiv.org/pdf/1706.10179.pdf
The current abstract for the paper is given below:
The goal of the paper is to survey the major advances for the Lasso and the Horseshoe, two regularization methodologies for sparse signal recovery. The Lasso and its variants are a gold standard for selecting the best subset of predictors, while the Horseshoe is a state-of-the-art Bayesian procedure. The Lasso has the advantage of being a scalable convex optimization method, whereas the Horseshoe penalty is non-convex and its estimator is the posterior mean, which minimizes Bayes risk under quadratic loss. We provide a novel view from three aspects: (i) theoretical optimality, (ii) efficiency and scalability of computation, and (iii) methodological development and performance in high-dimensional inference, for the Gaussian sparse model and beyond.
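
For concreteness, below is a minimal, self-contained sketch in Python (numpy and scikit-learn) contrasting the two approaches on a simulated Gaussian sparse model. This is not the code used in the paper: the data dimensions, the Lasso penalty `alpha=0.1`, the function name `horseshoe_gibbs`, and the sampler settings are all illustrative choices, and the Gibbs updates follow the inverse-gamma auxiliary-variable representation of the half-Cauchy priors due to Makalic and Schmidt (2016).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Gaussian sparse model: only the first 5 of 100 coefficients are nonzero.
n, p, s = 200, 100, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 3.0
y = X @ beta_true + rng.standard_normal(n)

# Lasso: scalable convex l1-penalized least squares.
# The penalty weight alpha=0.1 is an arbitrary illustrative choice.
lasso = Lasso(alpha=0.1).fit(X, y)

def horseshoe_gibbs(X, y, n_iter=2000, burn=1000, seed=1):
    """Gibbs sampler for horseshoe regression via the inverse-gamma
    auxiliary-variable form of the half-Cauchy priors (Makalic & Schmidt, 2016)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.zeros(p)
    sigma2, tau2, xi = 1.0, 1.0, 1.0
    lam2, nu = np.ones(p), np.ones(p)
    inv_gamma = lambda a, b: b / rng.gamma(a, size=np.shape(b))
    draws = []
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}), A = X'X + diag(1/(tau2*lam2))
        A = XtX + np.diag(1.0 / (tau2 * lam2))
        L = np.linalg.cholesky(A)
        mean = np.linalg.solve(A, Xty)
        beta = mean + np.sqrt(sigma2) * np.linalg.solve(L.T, rng.standard_normal(p))
        resid = y - X @ beta
        # Noise variance, then local (lam2) and global (tau2) shrinkage scales.
        sigma2 = inv_gamma((n + p) / 2,
                           (resid @ resid + np.sum(beta**2 / (tau2 * lam2))) / 2)
        lam2 = inv_gamma(1.0, 1.0 / nu + beta**2 / (2 * tau2 * sigma2))
        nu = inv_gamma(1.0, 1.0 + 1.0 / lam2)
        tau2 = inv_gamma((p + 1) / 2,
                         1.0 / xi + np.sum(beta**2 / lam2) / (2 * sigma2))
        xi = inv_gamma(1.0, 1.0 + 1.0 / tau2)
        if it >= burn:
            draws.append(beta)
    # Posterior mean: the Bayes estimator under quadratic loss.
    return np.mean(draws, axis=0)

beta_hs = horseshoe_gibbs(X, y)
print("Lasso nonzeros:", int(np.sum(lasso.coef_ != 0)))
print("Lasso     L2 error:", np.linalg.norm(lasso.coef_ - beta_true))
print("Horseshoe L2 error:", np.linalg.norm(beta_hs - beta_true))
```

The sketch highlights the contrast in the abstract: the Lasso solves a convex program and sets a subset of coefficients exactly to zero, while the Horseshoe returns a posterior mean that shrinks small coefficients heavily but leaves large ones nearly untouched.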