What is wrong with your current code? (Except that you might want to specify a common cv.ind.)
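For instance, a common cv.ind can be built once and passed to every cv.biglasso() call so that all models are evaluated on the exact same folds (a minimal sketch; the 5-fold assignment below is only an illustration):

set.seed(123)
cv.ind <- sample(rep(1:5, length.out = 100))  # fold id for each of 100 observations
# then pass cv.ind = cv.ind to each cv.biglasso() call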
Hi @privefl, thanks for the answer. I don't think the alpha value can be set in the cv.biglasso function, and the default is L1 penalization (alpha = 1). Correct me if I'm wrong?
From the documentation, you can see that "..." stands for "Additional arguments to biglasso". So you can use alpha in cv.biglasso too.
library(biglasso)   # cv.biglasso()
library(bigmemory)  # as.big.matrix()
X <- replicate(30, rnorm(100))                   # 100 x 30 predictor matrix
Y <- sample(c(0, 1), replace = TRUE, size = 100) # binary outcome
X.bm <- as.big.matrix(X)
cv <- cv.biglasso(X.bm, Y, family = "binomial", nfolds = 5, ncores = 3,
                  seed = 123, alpha = 1)
summary(cv)
Repeating the code with alpha = 0 gives me the same result. How can the alpha value be assigned?
You need to also add penalty = "ridge".
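For example, reusing X.bm and Y from the snippet above, the ridge fit would look something like this (a sketch; cv0 is just an illustrative name):

# penalty = "ridge" makes the alpha = 0 setting take effect
cv0 <- cv.biglasso(X.bm, Y, family = "binomial", nfolds = 5, ncores = 3,
                   seed = 123, penalty = "ridge", alpha = 0)
summary(cv0)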
Ah, of course! glmnet asks only for alpha, so I did not think of this. Thanks a lot @privefl! I'll close the "issue".
Hi!
Currently, the script enables lambda parameter optimization for alpha = 1 (lasso) only. Is that correct? Is it possible to optimize the lambda.min value with cross-validation over a range of alpha values (0, 0.05, 0.1, [...], 0.9, 0.95, 1.0)? That way I could compare the lasso, ridge, and elastic-net approaches.
The code would look something like this (indexing the list by position, since a fractional index like cv[0.05] does not work in R, and switching the penalty at the endpoints as discussed above):

a <- seq(0, 1, 0.05)
cv <- list()
for (i in seq_along(a)) {
  pen <- if (a[i] == 0) "ridge" else if (a[i] == 1) "lasso" else "enet"
  cv[[i]] <- cv.biglasso(X.bm, Y, family = "gaussian", nfolds = 5,
                         penalty = pen, alpha = a[i])
}
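To then compare lasso, ridge, and elastic net, one could pick the alpha with the lowest cross-validation error (a sketch assuming the cve and lambda.min components that cv.biglasso objects expose; passing a common cv.ind to every call, as suggested above, would make the comparison fair):

cve.min <- sapply(cv, function(fit) min(fit$cve))  # best CV error per alpha
best <- which.min(cve.min)
a[best]                # alpha value with the lowest CV error
cv[[best]]$lambda.min  # corresponding lambda.min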