(This is a more complete version of the fix submitted for #1096.)
Currently, some of the functions for the `glmnet` model check whether the training data is a `sparseMatrix`, and some don't. The result is that the initial operations in `train()` may succeed, and then a later step in the workflow fails (usually with "Cholmod error 'problem too large'" for a `sparseMatrix` with very large dimensions) because some of the training data is inadvertently converted to an (impossibly large) dense `Matrix`.
For instance, this bug currently occurs whenever `prob()` in `glmnet.R` is called (which happens if `trainControl(classProbs = TRUE)` is set), or if `tuneLength` is used instead of `tuneGrid` for `train()`, because `tuneLength = ...` triggers a call to `grid()` in `glmnet.R`, which does not check for a `sparseMatrix` before executing `Matrix::as.matrix()`.
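
For illustration, here is a minimal sketch of the kind of guard involved. The helper name `to_model_matrix()` is hypothetical and not caret's actual code; the point is simply that sparse input should be left sparse (glmnet accepts `dgCMatrix` input directly) rather than densified with `as.matrix()`:

```r
library(Matrix)

## Hypothetical helper: only densify inputs that are not already sparse.
to_model_matrix <- function(x) {
  if (methods::is(x, "sparseMatrix")) {
    x              # leave it sparse; glmnet handles sparse matrices natively
  } else {
    as.matrix(x)   # safe to densify ordinary data frames / matrices
  }
}

## A very wide sparse matrix stays sparse instead of triggering
## "Cholmod error 'problem too large'" on conversion to dense.
x <- Matrix::rsparsematrix(nrow = 100, ncol = 50000, density = 0.001)
class(to_model_matrix(x))   # "dgCMatrix"
```

The fix applies this kind of check consistently across the glmnet module's functions, so that a workflow which starts out sparse does not get silently converted to dense partway through.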