DataSlingers / MoMA

MoMA: Modern Multivariate Analysis in R
https://DataSlingers.github.io/MoMA
GNU General Public License v2.0

SLOPE Penalty #24

Closed: michaelweylandt closed this issue 5 years ago

michaelweylandt commented 5 years ago

Recently, the "SLOPE" penalty (the sorted L1-norm) has been shown to have good theoretical properties and attractive performance in simulations [1]. It is a first-order penalty, so it would fit within the MoMA framework. In full generality it has one tuning parameter per coefficient, so we might restrict to a reduced parameterization, e.g., the BH-type rule discussed in [1]. Algorithm 4 of [1] gives a fast algorithm for the proximal operator.

[1] Bogdan, M., van den Berg, E., Sabatti, C., Su, W., and Candès, E. J. (2015). "SLOPE - Adaptive Variable Selection via Convex Optimization." Annals of Applied Statistics 9(3): 1103-1140. https://projecteuclid.org/euclid.aoas/1446488733
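Since the proximal operator is the key computational piece, here is a minimal R sketch of the sorted-L1 prox via the stack-based averaging (PAVA) idea described in Algorithm 4 of [1], together with the BH-type weight sequence. `prox_sorted_L1` and its argument names are illustrative, not part of MoMA's API:

```r
# Minimal sketch of the sorted-L1 proximal operator, following the
# stack-based PAVA idea of Algorithm 4 in [1]. Illustrative only.
#
# Solves: argmin_x 0.5 * ||x - y||^2 + sum_i lambda_i * |x|_(i),
# where |x|_(1) >= ... >= |x|_(p) and lambda is nonincreasing, >= 0.
prox_sorted_L1 <- function(y, lambda) {
  stopifnot(length(y) == length(lambda),
            !is.unsorted(rev(lambda)), all(lambda >= 0))
  p   <- length(y)
  sgn <- sign(y)
  ord <- order(abs(y), decreasing = TRUE)
  z   <- abs(y)[ord] - lambda          # may be negative / non-monotone

  # Stack of averaged blocks; merge whenever monotonicity is violated.
  start <- integer(p); end <- integer(p)
  s <- numeric(p); w <- numeric(p); val <- numeric(p)
  k <- 0
  for (i in seq_len(p)) {
    k <- k + 1
    start[k] <- i; end[k] <- i
    s[k] <- z[i]; w[k] <- 1; val[k] <- z[i]
    while (k > 1 && val[k - 1] <= val[k]) {
      s[k - 1]   <- s[k - 1] + s[k]
      w[k - 1]   <- w[k - 1] + w[k]
      end[k - 1] <- end[k]
      val[k - 1] <- s[k - 1] / w[k - 1]
      k <- k - 1
    }
  }

  # Read the solution off the stack, clip at zero, undo sort and signs.
  x_sorted <- numeric(p)
  for (j in seq_len(k)) x_sorted[start[j]:end[j]] <- max(val[j], 0)
  x <- numeric(p)
  x[ord] <- x_sorted
  sgn * x
}

# BH-type weight sequence from [1]: lambda_i = qnorm(1 - q * i / (2 * p)).
p <- 10; q <- 0.1
lam <- qnorm(1 - q * seq_len(p) / (2 * p))
prox_sorted_L1(rnorm(p), lam)
```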

michaelweylandt commented 5 years ago

@Banana1530 , is this fully addressed by #29, or is there more to do? (The only thing I can think of is allowing more / arbitrary weight schemes, but I don't know if that's actually useful.)

If there's nothing more to do, we can close this and #27.

Banana1530 commented 5 years ago

Let's close this issue. In my experience, sparse PCA with SLOPE, LASSO, SCAD, or MCP gives basically the same results.

michaelweylandt commented 5 years ago

Sounds good - certainly for the SPCA case they should be essentially equivalent. (The sub-problems are penalized linear regressions with an identity design matrix, which pretty much any of these methods should get right.) In theory, they might differ more for a case with an unusual smoothing matrix, but most of the smoothers used in practice are pretty well-behaved as well.
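One way to see the identity-design point: with a constant weight vector, the sorted L1-norm is just a scaled L1-norm, so its prox collapses to elementwise soft-thresholding, i.e., the lasso prox, and the two penalties act identically on that sub-problem. A quick check using the `prox_sorted_L1` sketch above (`soft_threshold` is an illustrative helper):

```r
# With constant weights, the sorted L1-norm equals lambda * ||x||_1,
# so its prox is elementwise soft-thresholding (the lasso prox).
soft_threshold <- function(y, lam) sign(y) * pmax(abs(y) - lam, 0)

y <- c(3, -1.5, 0.2, -4)
all.equal(prox_sorted_L1(y, rep(1, length(y))),
          soft_threshold(y, 1))  # TRUE
```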