junyangq / snpnet

snpnet: Fast and scalable lasso/elastic-net solver for large SNP data

`--prevIter` option does not work when covariates are an empty list and a validation set is specified #15

Closed yk-tanigawa closed 5 years ago

yk-tanigawa commented 5 years ago

When `prevIter > 0` and a validation set is specified, the current implementation tries to execute this line:

https://github.com/junyangq/snpnet/blob/d965bf8d7bd8c4989c9b2a66f8ade1eee8324514/R/snpnet.R#L195

However, this leads to the following error:

Error in `:=`((chr.to.keep), prepareFeatures(chr.val, chr.to.keep, stats,  :
  Check that is.data.table(DT) == TRUE. Otherwise, := and `:=`(...) are defined for use in j, once only and in particular ways. See help(":=").
Calls: snpnet_fit_main -> snpnet -> :=
Execution halted

This is because `features.val` is `NULL`, as defined here:

https://github.com/junyangq/snpnet/blob/d965bf8d7bd8c4989c9b2a66f8ade1eee8324514/R/snpnet.R#L126
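This is easy to reproduce outside snpnet. Since `NULL` is not a `data.table`, `[` does not dispatch to data.table's method, but the `j` expression is still evaluated, so `:=` is called as a plain function and raises exactly the error above. A minimal sketch (standalone repro, not snpnet's code; `features.val` here is just a stand-in name):

```r
library(data.table)

features.val <- NULL  # what snpnet leaves when there are no covariates

# calling := outside a data.table subset raises the
# "Check that is.data.table(DT) == TRUE" error from the report above
err <- tryCatch(
  features.val[, c("score") := 1],
  error = function(e) conditionMessage(e)
)
err
```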

I think there should be one more condition in the `if` clause on this line to prevent the error:

https://github.com/junyangq/snpnet/blob/d965bf8d7bd8c4989c9b2a66f8ade1eee8324514/R/snpnet.R#L195
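A minimal, self-contained sketch of the kind of guard I have in mind. The helper names below (`update_val_features`, `prepare_features_stub`) are stand-ins for illustration, not snpnet's actual functions; the point is only the extra `is.null()` check:

```r
library(data.table)

# stand-in for snpnet's feature-preparation step (hypothetical)
prepare_features_stub <- function(cols) as.list(seq_along(cols))

update_val_features <- function(features.val, chr.to.keep) {
  # proposed extra condition: skip the in-place update when there are
  # no validation features instead of calling := on NULL
  if (!is.null(features.val)) {
    features.val[, (chr.to.keep) := prepare_features_stub(chr.to.keep)]
  }
  features.val
}

dt <- data.table(id = 1:3)
update_val_features(dt, c("snp1", "snp2"))  # columns added in place
update_val_features(NULL, c("snp1", "snp2"))  # returns NULL, no error
```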

yk-tanigawa commented 5 years ago

I tried to fix this issue with a patch: https://github.com/junyangq/snpnet/pull/16

However, it looks like there is another, related issue.

I got this error:

Error in cbind2(1, newx) %*% nbeta :
  Cholmod error 'X and/or Y have wrong dimensions' at file ../MatrixOps/cholmod_sdmult.c, line 90
Calls: snpnet_fit_main ... NextMethod -> predict.glmnet -> as.matrix -> %*% -> %*%
Execution halted

I think this error comes from this line of the code (my best guess is that it throws when `features.val` is `NULL`):

https://github.com/junyangq/snpnet/blob/d965bf8d7bd8c4989c9b2a66f8ade1eee8324514/R/snpnet.R#L319
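If that guess is right, the validation-metric step would need the same kind of guard, so that an empty `newx` is never passed into the prediction. A self-contained sketch of the pattern (the stub below stands in for the glmnet prediction call; none of these names are snpnet's actual variables):

```r
# stand-in for the prediction on the validation set; the real call fails
# inside cbind2(1, newx) %*% nbeta when newx has the wrong dimensions
predict_stub <- function(newx) {
  stopifnot(is.matrix(newx), ncol(newx) > 0)
  rowSums(newx)
}

score_validation <- function(features.val) {
  # proposed guard: return NA metrics instead of predicting on NULL
  if (is.null(features.val)) return(NA_real_)
  mean(predict_stub(as.matrix(features.val)))
}

score_validation(NULL)                          # NA instead of a Cholmod error
score_validation(data.frame(x = 1:3, y = 4:6))  # normal path still works
```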

yk-tanigawa commented 5 years ago

Note that I observe this issue only when the computation is started with `--prevIter X`, where X is an integer greater than 0 (i.e., `--prevIter 0` works fine).

yk-tanigawa commented 5 years ago

With the current commit, it seems this bug is fixed. Thank you!